Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-11 22:42
Elapsed: 25m55s
Revision:
Builder: gke-prow-containerd-pool-99179761-02q6
Refs: master:08bee2cc
      72682:d52ba641
      72714:d0b35d1b
      72797:28a6a446
      72831:f62cc819
pod: 1c0a4f3b-15f2-11e9-a282-0a580a6c019f
infra-commit: dd6aca2a4
repo: k8s.io/kubernetes
repo-commit: c81a3fa66fbb59644436ec515e20faadeed1eb13
repos: {u'k8s.io/kubernetes': u'master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,72682:d52ba6413dac9b5441ee6babb01df56c0d0a2c39,72714:d0b35d1b05bdeacbb5e4f0f42decf7f977d323a1,72797:28a6a446a14d064d8a85c3e59b3c77f2127be35b,72831:f62cc81934634433eb8c7dbfc5bf755247a8efeb'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestImageLocality 4.81s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestImageLocality$
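
A minimal local-reproduction sketch, for reference only: it assumes a kubernetes checkout at the commits listed above on your GOPATH and an etcd binary on PATH, since the test apiserver in the log below expects etcd at http://127.0.0.1:2379 (the exact etcd version and setup used by CI may differ).

    # start a local etcd on the endpoint the test apiserver pins, then run the failing test
    etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &
    go test -v k8s.io/kubernetes/test/integration/scheduler -run TestImageLocality$
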
I0111 23:02:11.995118  122083 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0111 23:02:11.995180  122083 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 23:02:11.995383  122083 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0111 23:02:11.995433  122083 master.go:229] Using reconciler: 
I0111 23:02:11.997719  122083 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:11.997957  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:11.998041  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:11.998130  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:11.998309  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:11.998772  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:11.999003  122083 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0111 23:02:11.999059  122083 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:11.999300  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:11.999324  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:11.999480  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:11.999606  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:11.999669  122083 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 23:02:11.999953  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.002296  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.002546  122083 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 23:02:12.002634  122083 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.002769  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.002831  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.002903  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.003050  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.003291  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.004295  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.004613  122083 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0111 23:02:12.004679  122083 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.004778  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.004815  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.004858  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.004944  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.005000  122083 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 23:02:12.005221  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.005558  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.006295  122083 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0111 23:02:12.006499  122083 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.006617  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.006679  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.006725  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.006889  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.006955  122083 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 23:02:12.007192  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.010074  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.010331  122083 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0111 23:02:12.010551  122083 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.010943  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.011205  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.011307  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.010641  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.011101  122083 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 23:02:12.011609  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.012093  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.012357  122083 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0111 23:02:12.012574  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.012666  122083 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.012817  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.012839  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.012908  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.012944  122083 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 23:02:12.013120  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.014937  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.015167  122083 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0111 23:02:12.015320  122083 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.015403  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.015425  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.015461  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.015538  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.015576  122083 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 23:02:12.015732  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.017293  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.017466  122083 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0111 23:02:12.018676  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.018732  122083 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 23:02:12.021072  122083 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.021285  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.021311  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.021399  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.021505  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.022410  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.022484  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.022608  122083 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0111 23:02:12.022768  122083 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.022853  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.022897  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.022939  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.022988  122083 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 23:02:12.025483  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.025878  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.026008  122083 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0111 23:02:12.026203  122083 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.026295  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.026326  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.026377  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.026479  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.026523  122083 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0111 23:02:12.026735  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.028852  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.029005  122083 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0111 23:02:12.029221  122083 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.029332  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.029357  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.029539  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.029624  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.029664  122083 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0111 23:02:12.029906  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.030230  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.030358  122083 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0111 23:02:12.030525  122083 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.030605  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.030627  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.030657  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.030748  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.030779  122083 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 23:02:12.030961  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.033784  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.033942  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.034281  122083 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 23:02:12.034968  122083 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0111 23:02:12.035257  122083 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.035959  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.035988  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.036104  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.036287  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.037309  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.037525  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.037697  122083 store.go:1414] Monitoring services count at <storage-prefix>//services
I0111 23:02:12.037769  122083 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0111 23:02:12.037777  122083 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.038080  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.038116  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.038196  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.038320  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.039413  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.039524  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.039647  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.039711  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.039764  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.040193  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.040575  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.040764  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.040768  122083 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.040837  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.040860  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.040931  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.040990  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.041340  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.041486  122083 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 23:02:12.041664  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.041752  122083 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 23:02:12.057960  122083 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0111 23:02:12.058084  122083 master.go:416] Enabling API group "authentication.k8s.io".
I0111 23:02:12.058120  122083 master.go:416] Enabling API group "authorization.k8s.io".
I0111 23:02:12.058353  122083 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.058514  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.058625  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.058746  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.058946  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.059854  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.059949  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.060161  122083 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:02:12.060302  122083 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:02:12.060443  122083 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.060961  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.060999  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.061082  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.061270  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.061669  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.061811  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.061890  122083 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:02:12.061912  122083 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:02:12.062590  122083 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.064096  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.064123  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.064183  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.064293  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.064691  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.064903  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.065155  122083 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0111 23:02:12.065208  122083 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 23:02:12.065264  122083 master.go:416] Enabling API group "autoscaling".
I0111 23:02:12.065593  122083 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.065773  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.065806  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.065933  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.066089  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.067243  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.067343  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.067608  122083 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0111 23:02:12.067683  122083 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 23:02:12.069512  122083 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.069618  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.069671  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.069718  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.069837  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.070537  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.070643  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.070868  122083 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0111 23:02:12.070997  122083 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 23:02:12.071086  122083 master.go:416] Enabling API group "batch".
I0111 23:02:12.071359  122083 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.071453  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.071466  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.071497  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.071606  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.072162  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.072296  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.072455  122083 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0111 23:02:12.072499  122083 master.go:416] Enabling API group "certificates.k8s.io".
I0111 23:02:12.072669  122083 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.072858  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.072927  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.073041  122083 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 23:02:12.073197  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.073292  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.073764  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.073824  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.074307  122083 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 23:02:12.074376  122083 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 23:02:12.074710  122083 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.074940  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.074964  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.075427  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.076207  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.077202  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.077303  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.077804  122083 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0111 23:02:12.077865  122083 master.go:416] Enabling API group "coordination.k8s.io".
I0111 23:02:12.078114  122083 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 23:02:12.079626  122083 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.079816  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.079879  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.079958  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.080090  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.081047  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.081204  122083 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0111 23:02:12.081242  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.081348  122083 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0111 23:02:12.081393  122083 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.081479  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.081502  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.081886  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.081952  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.082825  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.082904  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.083088  122083 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:02:12.083223  122083 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:02:12.083305  122083 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.083387  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.083401  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.084650  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.084699  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.085340  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.085484  122083 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:02:12.085626  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.085634  122083 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.085800  122083 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:02:12.086739  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.086803  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.087122  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.087486  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.088048  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.088432  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.088483  122083 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0111 23:02:12.088700  122083 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0111 23:02:12.089073  122083 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.089422  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.089514  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.089645  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.089788  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.090276  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.090484  122083 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 23:02:12.090663  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.090751  122083 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.090775  122083 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 23:02:12.090849  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.090930  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.091034  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.091106  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.092260  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.092390  122083 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:02:12.092531  122083 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.092603  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.092615  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.092665  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.092741  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.092769  122083 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:02:12.093001  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.093545  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.093651  122083 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 23:02:12.093669  122083 master.go:416] Enabling API group "extensions".
I0111 23:02:12.093809  122083 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.093877  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.093889  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.093915  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.093974  122083 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 23:02:12.093978  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.094131  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.094419  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.094539  122083 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0111 23:02:12.094576  122083 master.go:416] Enabling API group "networking.k8s.io".
I0111 23:02:12.094719  122083 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.094782  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.094793  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.094819  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.094904  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.094951  122083 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 23:02:12.095115  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.095408  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.095509  122083 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0111 23:02:12.095601  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.095631  122083 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.095661  122083 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 23:02:12.095690  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.095700  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.095748  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.095861  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.096279  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.096392  122083 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0111 23:02:12.096417  122083 master.go:416] Enabling API group "policy".
I0111 23:02:12.096445  122083 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.096519  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.096534  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.096561  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.096622  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.096645  122083 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0111 23:02:12.096762  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.096990  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.097096  122083 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 23:02:12.097263  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.097285  122083 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.097318  122083 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 23:02:12.097352  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.097364  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.097393  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.097490  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.097933  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.098061  122083 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 23:02:12.098089  122083 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.098192  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.098206  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.098234  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.098254  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.098284  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.098386  122083 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 23:02:12.098557  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.098627  122083 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 23:02:12.098721  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.098740  122083 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.098819  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.098833  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.098861  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.098901  122083 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 23:02:12.099013  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.103863  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.104328  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.104521  122083 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 23:02:12.104606  122083 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.104746  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.104770  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.104833  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.104928  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.105130  122083 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 23:02:12.107460  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.107746  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.107956  122083 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0111 23:02:12.108243  122083 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 23:02:12.109496  122083 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.111038  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.111164  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.111283  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.111426  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.112492  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.112731  122083 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0111 23:02:12.112782  122083 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.113064  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.112912  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.112956  122083 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 23:02:12.114352  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.114815  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.114882  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.115500  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.115624  122083 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0111 23:02:12.115693  122083 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 23:02:12.115805  122083 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.115898  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.115909  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.115938  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.115980  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.116463  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.116583  122083 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0111 23:02:12.116608  122083 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 23:02:12.117394  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.117478  122083 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 23:02:12.118619  122083 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.118701  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.118713  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.118770  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.118805  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.119673  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.119773  122083 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0111 23:02:12.119785  122083 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 23:02:12.119801  122083 master.go:408] Skipping disabled API group "settings.k8s.io".
I0111 23:02:12.119841  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.119915  122083 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 23:02:12.119951  122083 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.120039  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.120055  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.120085  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.120319  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.121368  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.121633  122083 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 23:02:12.121667  122083 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.121741  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.121753  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.121859  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.121980  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.121982  122083 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 23:02:12.122161  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.122425  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.122529  122083 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 23:02:12.122696  122083 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.122774  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.122806  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.122850  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.122952  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.122980  122083 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 23:02:12.123125  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.123418  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.123522  122083 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0111 23:02:12.123550  122083 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.123607  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.123617  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.123640  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.123713  122083 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 23:02:12.123744  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.123885  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.125669  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.125706  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.125815  122083 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0111 23:02:12.125840  122083 master.go:416] Enabling API group "storage.k8s.io".
I0111 23:02:12.125990  122083 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.126081  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.126101  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.126132  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.126213  122083 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 23:02:12.126721  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.127054  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.127172  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.127240  122083 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:02:12.127454  122083 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:02:12.127865  122083 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.127948  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.127961  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.127990  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.128099  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.128470  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.128521  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.128810  122083 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:02:12.128949  122083 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.129042  122083 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:02:12.129053  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.129211  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.129256  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.129302  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.129430  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.129709  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.129815  122083 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:02:12.129960  122083 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.130058  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.130072  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.130103  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.130193  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.130223  122083 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:02:12.130334  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.130556  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.130637  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.130655  122083 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:02:12.130673  122083 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:02:12.130807  122083 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.130874  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.130888  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.130907  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.130994  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.131876  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.132001  122083 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:02:12.132173  122083 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.132250  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.132262  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.132290  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.132305  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.132342  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.132454  122083 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:02:12.132756  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.132885  122083 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:02:12.132942  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.132992  122083 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:02:12.133003  122083 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.133103  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.133116  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.133202  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.133333  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.133648  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.133678  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.133742  122083 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:02:12.133878  122083 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.133944  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.133956  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.134100  122083 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:02:12.134752  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.134838  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.135392  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.135544  122083 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:02:12.135701  122083 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.135767  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.135803  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.135817  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.135845  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.135866  122083 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:02:12.136001  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.136517  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.136630  122083 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0111 23:02:12.136835  122083 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.136878  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.136937  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.136961  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.136990  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.137054  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.137153  122083 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 23:02:12.137611  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.137698  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.137738  122083 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0111 23:02:12.137849  122083 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.137935  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.137958  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.137996  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.138076  122083 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 23:02:12.138326  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.139366  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.139402  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.139493  122083 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0111 23:02:12.139617  122083 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 23:02:12.139698  122083 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.139798  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.139815  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.139847  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.139886  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.140204  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.140303  122083 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0111 23:02:12.140422  122083 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.140482  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.140493  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.140520  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.140591  122083 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 23:02:12.140734  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.141015  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.141126  122083 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0111 23:02:12.141173  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.141195  122083 master.go:416] Enabling API group "apps".
I0111 23:02:12.141229  122083 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.141260  122083 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 23:02:12.141287  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.141298  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.141325  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.141319  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.141389  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.141779  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.141921  122083 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0111 23:02:12.142045  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.142211  122083 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 23:02:12.141951  122083 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.142942  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.142983  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.143102  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.143266  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.143999  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.144097  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.144104  122083 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0111 23:02:12.144233  122083 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 23:02:12.144120  122083 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 23:02:12.144293  122083 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ac30fdc9-f83a-41cc-adc7-03d185201bf8", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0111 23:02:12.144552  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.144590  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.144633  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.144838  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.145373  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.145417  122083 store.go:1414] Monitoring events count at <storage-prefix>//events
I0111 23:02:12.145445  122083 master.go:416] Enabling API group "events.k8s.io".
I0111 23:02:12.146122  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
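The repeated clientconn.go / balancer_v1_wrapper.go pairs above are emitted while the apiserver dials a separate etcd v3 client for each resource's storage backend; every 'pin "127.0.0.1:2379"' line marks one such connection to the test etcd being established. A minimal sketch of an equivalent dial with the etcd v3 client library (the import path and the DialTimeout value are assumptions, not taken from this build):

package main

import (
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/clientv3" // assumed import path; the vendored path in this build may differ
)

func main() {
	// Dial the same endpoint the test apiserver uses for its storage backend.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"http://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second, // hypothetical timeout, not from the log
	})
	if err != nil {
		log.Fatalf("dial etcd: %v", err)
	}
	defer cli.Close()
	fmt.Println("connected to", cli.Endpoints())
}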
W0111 23:02:12.152955  122083 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0111 23:02:12.179305  122083 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 23:02:12.180041  122083 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 23:02:12.182660  122083 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 23:02:12.207485  122083 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
I0111 23:02:12.212535  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.212584  122083 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0111 23:02:12.212595  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.212604  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.212611  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.212817  122083 wrap.go:47] GET /healthz: (364.201µs) 500
goroutine 71487 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0093651f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0093651f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0074727a0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc007d3ce60, 0xc004d71380, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0700)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc007d3ce60, 0xc0093e0700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01298faa0, 0xc00ae06c40, 0x604c5a0, 0xc007d3ce60, 0xc0093e0700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45390]
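The GET /healthz 500s in this log come from polling the readiness endpoint while the etcd client connection and the post-start hooks ([-]etcd, [-]poststarthook/...) are still initializing; once those checks pass, /healthz returns 200. A minimal sketch of such a poll, assuming a plain-HTTP test server address (the port here is hypothetical; the log only shows client-side ports such as 127.0.0.1:45390):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	// Hypothetical base URL for the insecure test apiserver; the real port is
	// chosen by the integration harness and is not shown in this log.
	const healthzURL = "http://127.0.0.1:8080/healthz"

	for i := 0; i < 50; i++ {
		resp, err := http.Get(healthzURL)
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// 500 bodies look like the "[-]etcd failed: reason withheld" output above.
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("healthz never became ready")
}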
I0111 23:02:12.273791  122083 wrap.go:47] GET /api/v1/services: (1.541177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.278215  122083 wrap.go:47] GET /api/v1/services: (1.273299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.282654  122083 wrap.go:47] GET /api/v1/namespaces/default: (1.988561ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.295478  122083 wrap.go:47] POST /api/v1/namespaces: (9.555786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.303707  122083 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (7.785434ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.308877  122083 wrap.go:47] POST /api/v1/namespaces/default/services: (4.250035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.310445  122083 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.039632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.312963  122083 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (2.176151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.314906  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.314933  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.314942  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.314950  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.315276  122083 wrap.go:47] GET /healthz: (464.779µs) 500
goroutine 71168 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e3f5030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e3f5030, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc007e85a00, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0077cbf78, 0xc0032f3e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0077cbf78, 0xc00ea98200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00f35e960, 0xc00ae06c40, 0x604c5a0, 0xc0077cbf78, 0xc00ea98200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45404]
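The quoted "logging error output" bodies follow a fixed per-check format: "[+]<check> ok" for passing checks and "[-]<check> failed: reason withheld" for failing ones. A small sketch that splits such a body into passing and failing check names (the sample string is copied from the responses logged above):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Body copied from one of the /healthz 500 responses logged above.
	body := "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n" +
		"[+]poststarthook/generic-apiserver-start-informers ok\n" +
		"[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n" +
		"healthz check failed\n"

	var passed, failed []string
	for _, line := range strings.Split(body, "\n") {
		switch {
		case strings.HasPrefix(line, "[+]"):
			passed = append(passed, strings.TrimSuffix(strings.TrimPrefix(line, "[+]"), " ok"))
		case strings.HasPrefix(line, "[-]"):
			rest := strings.TrimPrefix(line, "[-]")
			failed = append(failed, strings.SplitN(rest, " failed:", 2)[0])
		}
	}
	fmt.Println("passed:", passed)
	fmt.Println("failed:", failed)
}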
I0111 23:02:12.316751  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.750906ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.318634  122083 wrap.go:47] GET /api/v1/services: (2.085848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:12.318935  122083 wrap.go:47] GET /api/v1/namespaces/default: (2.90379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45404]
I0111 23:02:12.318954  122083 wrap.go:47] GET /api/v1/services: (2.019459ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45408]
I0111 23:02:12.319291  122083 wrap.go:47] POST /api/v1/namespaces: (2.069194ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.321007  122083 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.136582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:12.321112  122083 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.050089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45390]
I0111 23:02:12.323484  122083 wrap.go:47] POST /api/v1/namespaces: (1.862329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:12.323814  122083 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.319392ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:12.325280  122083 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.023231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:12.327081  122083 wrap.go:47] POST /api/v1/namespaces: (1.500643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:12.413767  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.413856  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.413885  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.413938  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.414231  122083 wrap.go:47] GET /healthz: (570.488µs) 500
goroutine 71842 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc009365e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc009365e30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003d20480, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc007d3cf60, 0xc001613200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1900)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc007d3cf60, 0xc0093e1900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012e883c0, 0xc00ae06c40, 0x604c5a0, 0xc007d3cf60, 0xc0093e1900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.513779  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.513822  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.513833  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.513840  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.514046  122083 wrap.go:47] GET /healthz: (377.785µs) 500
goroutine 71704 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e832690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e832690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0075425c0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc001021568, 0xc005452f00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc001021568, 0xc011d97800)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc001021568, 0xc011d97800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc001021568, 0xc011d97800)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc001021568, 0xc011d97800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc001021568, 0xc011d97800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc001021568, 0xc011d97700)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc001021568, 0xc011d97700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01287eae0, 0xc00ae06c40, 0x604c5a0, 0xc001021568, 0xc011d97700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.614488  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.614515  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.614523  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.614528  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.614687  122083 wrap.go:47] GET /healthz: (294.289µs) 500
goroutine 71828 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e5d9110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e5d9110, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003cb7360, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc003cb15b0, 0xc00f3b2c00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc003cb15b0, 0xc00f819200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc003cb15b0, 0xc00f819200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012f04720, 0xc00ae06c40, 0x604c5a0, 0xc003cb15b0, 0xc00f819200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.713762  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.713797  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.713806  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.713814  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.713999  122083 wrap.go:47] GET /healthz: (358.182µs) 500
goroutine 71706 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e832770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e832770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc007542c00, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc001021590, 0xc005453500, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc001021590, 0xc011d97e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc001021590, 0xc011d97d00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc001021590, 0xc011d97d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01287ecc0, 0xc00ae06c40, 0x604c5a0, 0xc001021590, 0xc011d97d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.813768  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.813834  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.813846  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.813861  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.814067  122083 wrap.go:47] GET /healthz: (413.805µs) 500
goroutine 71830 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e5d9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e5d9340, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003cb7f40, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc003cb1638, 0xc00f3b3200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc003cb1638, 0xc00f819a00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc003cb1638, 0xc00f819a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012f04c00, 0xc00ae06c40, 0x604c5a0, 0xc003cb1638, 0xc00f819a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.913745  122083 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0111 23:02:12.913776  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:12.913786  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:12.913794  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:12.913995  122083 wrap.go:47] GET /healthz: (362.03µs) 500
goroutine 71805 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00f068c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00f068c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c807a0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc003ae5e58, 0xc003777e00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc003ae5e58, 0xc0112a4200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012ebd320, 0xc00ae06c40, 0x604c5a0, 0xc003ae5e58, 0xc0112a4200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:12.995675  122083 clientconn.go:551] parsed scheme: ""
I0111 23:02:12.995706  122083 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 23:02:12.995752  122083 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 23:02:12.995831  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:12.996260  122083 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 23:02:12.996338  122083 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 23:02:13.014620  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.014642  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:13.014650  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:13.014852  122083 wrap.go:47] GET /healthz: (1.171906ms) 500
goroutine 71859 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e5d9650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e5d9650, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c70a20, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc003cb16f8, 0xc00a69f340, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc003cb16f8, 0xc01135a100)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc003cb16f8, 0xc01135a100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012f05c20, 0xc00ae06c40, 0x604c5a0, 0xc003cb16f8, 0xc01135a100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:13.114603  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.114628  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:13.114636  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:13.115013  122083 wrap.go:47] GET /healthz: (1.386483ms) 500
goroutine 71861 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e5d9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e5d9880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003c70e80, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc003cb1718, 0xc009bb5340, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc003cb1718, 0xc01135a600)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc003cb1718, 0xc01135a600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0145f6360, 0xc00ae06c40, 0x604c5a0, 0xc003cb1718, 0xc01135a600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:13.217551  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.217587  122083 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 23:02:13.217596  122083 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 23:02:13.217789  122083 wrap.go:47] GET /healthz: (4.061066ms) 500
goroutine 71713 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e832d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e832d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc007543d60, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc001021618, 0xc009bb5600, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc001021618, 0xc0112f0a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc001021618, 0xc0112f0900)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc001021618, 0xc0112f0900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01287fd40, 0xc00ae06c40, 0x604c5a0, 0xc001021618, 0xc0112f0900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45678]
I0111 23:02:13.218064  122083 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (5.620153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.218463  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (5.942234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.220538  122083 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.460954ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.220945  122083 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.853639ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.221366  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.960978ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45676]
I0111 23:02:13.221532  122083 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 23:02:13.224197  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.353188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45678]
I0111 23:02:13.224682  122083 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (3.789151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.225037  122083 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (3.164588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.225979  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (991.798µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45678]
I0111 23:02:13.227293  122083 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.187629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.227486  122083 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 23:02:13.227501  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (769.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45678]
I0111 23:02:13.227509  122083 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 23:02:13.228835  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (972.86µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.229980  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (735.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.231012  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (702.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.232107  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (664.48µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.232989  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (586.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.235492  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.809954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.235671  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 23:02:13.236548  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (748.274µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.238592  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.752321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.238775  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 23:02:13.239758  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (851.854µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.241969  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.242202  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 23:02:13.243044  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (692.573µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.244962  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.245176  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 23:02:13.246004  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (707.723µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.255707  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.968725ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.255916  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 23:02:13.257235  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.142747ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.259633  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.067851ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.263054  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 23:02:13.266303  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.082137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.275329  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.722857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.275691  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 23:02:13.276833  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (846.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.279159  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.89113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.279424  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 23:02:13.292074  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (12.430676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.295510  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.918234ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.295792  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 23:02:13.297307  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.308067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.299730  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.984309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.300373  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 23:02:13.301632  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.092921ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.304267  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.153908ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.304566  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 23:02:13.306096  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.370078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.309121  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.656996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.309360  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 23:02:13.311203  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.663802ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.316372  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.316567  122083 wrap.go:47] GET /healthz: (1.63169ms) 500
goroutine 71944 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01294a230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01294a230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc003ab3860, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc001021800, 0xc001e1b900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc001021800, 0xc012a04a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc001021800, 0xc012a04900)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc001021800, 0xc012a04900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012af6ae0, 0xc00ae06c40, 0x604c5a0, 0xc001021800, 0xc012a04900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:13.319747  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (8.020324ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.320088  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 23:02:13.321209  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (928.157µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.324867  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.154956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.325542  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 23:02:13.327003  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.169232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.329420  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.941771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.329862  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 23:02:13.330979  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (873.716µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.333234  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.820334ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.333582  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 23:02:13.335204  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (998.933µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.337637  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.814752ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.337894  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 23:02:13.339538  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.359489ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.342459  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.169657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.343130  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:02:13.344697  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.250496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.357424  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (11.481711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.357786  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:02:13.360579  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (2.441918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.363176  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.97829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.363457  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 23:02:13.364770  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.130588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.368423  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.245424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.368710  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 23:02:13.370765  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (1.715654ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.375083  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.740133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.375318  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 23:02:13.376659  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (1.143961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.378481  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.240627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.378629  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:02:13.380123  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.171613ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.382211  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.744115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.382891  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 23:02:13.384087  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (851.261µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.386803  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.374749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.386997  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 23:02:13.388376  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.128888ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.391230  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.373985ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.391437  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:02:13.392486  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (869.519µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.394933  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.039728ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.395200  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 23:02:13.396301  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (886.15µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.398743  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.158371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.398992  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:02:13.400078  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (860.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.402257  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.749946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.402492  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:02:13.404101  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.358736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.406360  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.810111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.406627  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:02:13.408925  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.052294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.412038  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.582816ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.414060  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:02:13.414614  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.414805  122083 wrap.go:47] GET /healthz: (1.067577ms) 500
goroutine 71856 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013d16310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013d16310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0033f65c0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc007d3d500, 0xc00ed5ca00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc007d3d500, 0xc011f2d400)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc007d3d500, 0xc011f2d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0137ceea0, 0xc00ae06c40, 0x604c5a0, 0xc007d3d500, 0xc011f2d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:13.415258  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (971.553µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.419526  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.743627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.419903  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:02:13.422812  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.504134ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.428351  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.172727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.428623  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:02:13.436635  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (7.114048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.442866  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.372023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.443701  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:02:13.445551  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.565595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.447598  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.721186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.447862  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:02:13.449251  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.238803ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.451009  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.247043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.451249  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:02:13.452532  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (887.076µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.454838  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.617156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.455497  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:02:13.457039  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.276814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.461054  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.314304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.461660  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:02:13.463154  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.225136ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.465476  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.529526ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.465804  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:02:13.469283  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (3.172679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.472824  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.017295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.473231  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:02:13.475371  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.832251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.477809  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.022899ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.478173  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:02:13.480087  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.591311ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.485119  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.523569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.485322  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:02:13.486594  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (972.664µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.493751  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.629981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.494643  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:02:13.496357  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.380968ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.498665  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.839894ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
E0111 23:02:13.498821  122083 storage_rbac.go:193] unable to reconcile clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller: 0-length response with status code: 200 and content type: 
I0111 23:02:13.500063  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (966.921µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.502331  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.841182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.502574  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:02:13.504103  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.256533ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.506631  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.079813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.507069  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:02:13.508393  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.032182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.510454  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700354ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.510816  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:02:13.511932  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (738.576µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.514626  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.337138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.514805  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.514860  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:02:13.515122  122083 wrap.go:47] GET /healthz: (1.310825ms) 500
goroutine 72146 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0138813b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0138813b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002e421a0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc002ab4968, 0xc005747180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc002ab4968, 0xc01346d200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc002ab4968, 0xc01346d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014eacea0, 0xc00ae06c40, 0x604c5a0, 0xc002ab4968, 0xc01346d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:13.515775  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (750.581µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.517826  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.593466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.518007  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:02:13.519099  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (925.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.521505  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.522204  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:02:13.523169  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (818.754µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.525433  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.930395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.525929  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:02:13.527090  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (844.224µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.529517  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.729092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.529753  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:02:13.531063  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.107785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.532976  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.377374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.533396  122083 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:02:13.534460  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (803.484µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.537247  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.339535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.537513  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 23:02:13.554933  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.508814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.575893  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.523221ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.576214  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 23:02:13.594497  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.066143ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.615698  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.327498ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.615943  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 23:02:13.618269  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.618471  122083 wrap.go:47] GET /healthz: (1.268713ms) 500
goroutine 72157 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016b42af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016b42af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002ca6220, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc002ab5008, 0xc002beb2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc002ab5008, 0xc011ba3800)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc002ab5008, 0xc011ba3800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014f84de0, 0xc00ae06c40, 0x604c5a0, 0xc002ab5008, 0xc011ba3800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:13.637212  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (3.763717ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.655659  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.077676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.655914  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 23:02:13.674390  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.023377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.695654  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.314534ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.695991  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 23:02:13.714948  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.577606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.715112  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.715317  122083 wrap.go:47] GET /healthz: (1.403995ms) 500
goroutine 72192 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011843a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011843a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001cf4e60, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00316ead8, 0xc002beb7c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00316ead8, 0xc00f17c200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00316ead8, 0xc00f17c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012136960, 0xc00ae06c40, 0x604c5a0, 0xc00316ead8, 0xc00f17c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:13.735812  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.403942ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.736251  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 23:02:13.755101  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.738565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.776643  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.26034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.776897  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 23:02:13.794489  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.132538ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.815260  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.896796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.815573  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 23:02:13.815606  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.815800  122083 wrap.go:47] GET /healthz: (1.590491ms) 500
goroutine 72210 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016b43260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016b43260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0018d8be0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc002ab5218, 0xc003c4ac80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc002ab5218, 0xc0130e4900)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc002ab5218, 0xc0130e4900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc014f85980, 0xc00ae06c40, 0x604c5a0, 0xc002ab5218, 0xc0130e4900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:13.835403  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.076548ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.855413  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.046877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.855803  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 23:02:13.874507  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.058181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.895203  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.889076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.895491  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0111 23:02:13.914800  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.493446ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:13.915427  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:13.915627  122083 wrap.go:47] GET /healthz: (1.296338ms) 500
goroutine 72229 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007b37260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007b37260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000a63d60, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0054ae990, 0xc00ed5cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0054ae990, 0xc00c99da00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0054ae990, 0xc00c99da00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc009992180, 0xc00ae06c40, 0x604c5a0, 0xc0054ae990, 0xc00c99da00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:13.935538  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.16357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.935765  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 23:02:13.954455  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.117708ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.975613  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.255533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:13.975826  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 23:02:13.994548  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.199498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.015873  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.016124  122083 wrap.go:47] GET /healthz: (1.385574ms) 500
goroutine 72077 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118ba770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118ba770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc000f0b220, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc007d3db00, 0xc0049f1900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc007d3db00, 0xc007aabe00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc007d3db00, 0xc007aabe00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0ffa40, 0xc00ae06c40, 0x604c5a0, 0xc007d3db00, 0xc007aabe00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.016471  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.079223ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.016683  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 23:02:14.034562  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.225941ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.055447  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.067798ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.055708  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 23:02:14.075183  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.70045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.095804  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.435428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.096184  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 23:02:14.114508  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.114690  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.293518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.114704  122083 wrap.go:47] GET /healthz: (1.087572ms) 500
goroutine 72249 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012d38690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012d38690, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001047260, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc002017980, 0xc01770c3c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc002017980, 0xc012c82f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc002017980, 0xc012c82e00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc002017980, 0xc012c82e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e57cf00, 0xc00ae06c40, 0x604c5a0, 0xc002017980, 0xc012c82e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.135727  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.337295ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.136497  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 23:02:14.154350  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.050848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.176539  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.235232ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.176866  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 23:02:14.194638  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.228177ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.215499  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.215700  122083 wrap.go:47] GET /healthz: (2.054449ms) 500
goroutine 72253 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012d38e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012d38e00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0020e51c0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc002017a80, 0xc01770c780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc002017a80, 0xc014852100)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc002017a80, 0xc014852100)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc002017a80, 0xc014852100)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc002017a80, 0xc014852100)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc002017a80, 0xc014852100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc002017a80, 0xc014852000)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc002017a80, 0xc014852000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e57dec0, 0xc00ae06c40, 0x604c5a0, 0xc002017a80, 0xc014852000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:14.215819  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.180233ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.216009  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 23:02:14.234515  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.174249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.259382  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.461251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.259639  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 23:02:14.275158  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.382618ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.298428  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.910894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.298669  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 23:02:14.315232  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.315521  122083 wrap.go:47] GET /healthz: (1.68471ms) 500
goroutine 72276 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016975ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016975ab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0021f5020, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0021cabe0, 0xc01770cc80, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0021cabe0, 0xc011de3a00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0021cabe0, 0xc011de3a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ee8d920, 0xc00ae06c40, 0x604c5a0, 0xc0021cabe0, 0xc011de3a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:14.315765  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (2.394069ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.335733  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.226665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.336042  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 23:02:14.354545  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.176607ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.375620  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.264859ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.376091  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 23:02:14.394449  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.156992ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.415820  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.43451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.415992  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.416228  122083 wrap.go:47] GET /healthz: (2.632708ms) 500
goroutine 72306 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0145f8000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0145f8000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002338620, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00316f070, 0xc0000763c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00316f070, 0xc0152ec400)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00316f070, 0xc0152ec400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0119861e0, 0xc00ae06c40, 0x604c5a0, 0xc00316f070, 0xc0152ec400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.416570  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 23:02:14.434696  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.278695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.455504  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.076552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.455720  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 23:02:14.474817  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.415577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.495894  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.496211  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 23:02:14.516719  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (2.462949ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.517962  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.518188  122083 wrap.go:47] GET /healthz: (4.321183ms) 500
goroutine 72313 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0145f8af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0145f8af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002530be0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00316f200, 0xc01770d2c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00316f200, 0xc0152ed700)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00316f200, 0xc0152ed700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011987020, 0xc00ae06c40, 0x604c5a0, 0xc00316f200, 0xc0152ed700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.535680  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345044ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.535940  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 23:02:14.554415  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.116655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.579104  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.21208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.579456  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 23:02:14.594683  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.127055ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.616476  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.616814  122083 wrap.go:47] GET /healthz: (1.143485ms) 500
goroutine 72263 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118bb810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118bb810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc001111d40, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc007d3de08, 0xc00ed5d400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc007d3de08, 0xc00c81f200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc007d3de08, 0xc00c81f200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e7406c0, 0xc00ae06c40, 0x604c5a0, 0xc007d3de08, 0xc00c81f200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.627818  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (12.157935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.628295  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 23:02:14.636095  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.713474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.656127  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.718853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.656756  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 23:02:14.674580  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.233631ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.695680  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.377085ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.695932  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 23:02:14.719912  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.720017  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.760465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.720178  122083 wrap.go:47] GET /healthz: (2.582009ms) 500
goroutine 72320 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0145f96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0145f96c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0027bb4e0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00316f578, 0xc005747540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00316f578, 0xc00b983100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00316f578, 0xc00b983000)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00316f578, 0xc00b983000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011987b60, 0xc00ae06c40, 0x604c5a0, 0xc00316f578, 0xc00b983000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:14.735665  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.351023ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.736363  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 23:02:14.757331  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.382242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.776648  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.778743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.776933  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 23:02:14.795524  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.840309ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.817252  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.817508  122083 wrap.go:47] GET /healthz: (3.936703ms) 500
goroutine 72355 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00e45d8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00e45d8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00288bd80, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0021cb0e8, 0xc005747a40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0ae00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0021cb0e8, 0xc003c0ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017246960, 0xc00ae06c40, 0x604c5a0, 0xc0021cb0e8, 0xc003c0ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:14.818325  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.460002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.818581  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 23:02:14.834469  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.168116ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.855529  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.145977ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.855856  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 23:02:14.874943  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.609731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.895833  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.486993ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.896161  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 23:02:14.914621  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.210345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:14.914662  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:14.914897  122083 wrap.go:47] GET /healthz: (1.219608ms) 500
goroutine 72370 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00553a770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00553a770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc002972e40, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc009b20190, 0xc003c4b180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc009b20190, 0xc0023a9d00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc009b20190, 0xc0023a9d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0177902a0, 0xc00ae06c40, 0x604c5a0, 0xc009b20190, 0xc0023a9d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:14.935517  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.149973ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.935755  122083 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 23:02:14.954650  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.272066ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.956471  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.461811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.977556  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.062096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.977805  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 23:02:14.995038  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.118621ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:14.996710  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.262893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.016982  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:15.017234  122083 wrap.go:47] GET /healthz: (2.424923ms) 500
goroutine 72350 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00af585b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00af585b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00299f5a0, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00316f798, 0xc003c4b540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00316f798, 0xc007fd3100)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00316f798, 0xc007fd3100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0172af440, 0xc00ae06c40, 0x604c5a0, 0xc00316f798, 0xc007fd3100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:15.018862  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.806008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.019215  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:02:15.035732  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.334058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.038056  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.787521ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.055500  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.125881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.055774  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:02:15.076519  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (3.004058ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.079530  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (2.411593ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.095740  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.325961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.096545  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:02:15.115101  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:15.115235  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.84812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.115338  122083 wrap.go:47] GET /healthz: (1.776455ms) 500
goroutine 72386 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc007f5f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc007f5f1f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0035f1040, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0021cb400, 0xc008db7900, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0021cb400, 0xc00ae51d00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0021cb400, 0xc00ae51d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc017739e00, 0xc00ae06c40, 0x604c5a0, 0xc0021cb400, 0xc00ae51d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:15.118993  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.116067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.137544  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.173053ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.138045  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:02:15.155002  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.458405ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.157118  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.671858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.176997  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.547216ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.177307  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:02:15.196736  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (3.367672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.198863  122083 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.499324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.217581  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:15.217610  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (4.271186ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.218417  122083 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 23:02:15.218761  122083 wrap.go:47] GET /healthz: (4.500205ms) 500
goroutine 72412 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc011c9f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc011c9f420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00921ee80, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc005471d88, 0xc007e8e780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc005471d88, 0xc0123f0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc005471d88, 0xc00f31bf00)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc005471d88, 0xc00f31bf00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012ab7b00, 0xc00ae06c40, 0x604c5a0, 0xc005471d88, 0xc00f31bf00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45402]
I0111 23:02:15.234595  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.23716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.236418  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.347043ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.255878  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.544836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.256599  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 23:02:15.276181  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.284233ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.277951  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.292673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.296101  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.609558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.296431  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 23:02:15.315672  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (2.275259ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.315811  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:15.316039  122083 wrap.go:47] GET /healthz: (1.773832ms) 500
goroutine 72427 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc01300cc40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc01300cc40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc009339960, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc00de265d0, 0xc000076dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc00de265d0, 0xc016846300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc00de265d0, 0xc016846200)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc00de265d0, 0xc016846200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc016736240, 0xc00ae06c40, 0x604c5a0, 0xc00de265d0, 0xc016846200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:15.318362  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.796733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.336117  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.748562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.336376  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 23:02:15.354538  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.15697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.356298  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.391848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.375810  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.441214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.376096  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 23:02:15.394472  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.117476ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.396238  122083 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.348816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.415563  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.197111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.416119  122083 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 23:02:15.416216  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 23:02:15.416329  122083 wrap.go:47] GET /healthz: (1.216537ms) 500
goroutine 72466 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0141a8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0141a8850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0092d9940, 0x1f4)
net/http.Error(0x7f66c51d3078, 0xc0054af7f0, 0xc0000772c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
net/http.HandlerFunc.ServeHTTP(0xc003d9c380, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0029f9900, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc009414850, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e84da, 0xe, 0xc00b782d80, 0xc009414850, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a540, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
net/http.HandlerFunc.ServeHTTP(0xc007d1f560, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
net/http.HandlerFunc.ServeHTTP(0xc00a82a580, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca400)
net/http.HandlerFunc.ServeHTTP(0xc00a1a15e0, 0x7f66c51d3078, 0xc0054af7f0, 0xc0139ca400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01733d140, 0xc00ae06c40, 0x604c5a0, 0xc0054af7f0, 0xc0139ca400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:45406]
I0111 23:02:15.434484  122083 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (931.971µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.436038  122083 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.0928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.456035  122083 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.400649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.456377  122083 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 23:02:15.515208  122083 wrap.go:47] GET /healthz: (1.543385ms) 200 [Go-http-client/1.1 127.0.0.1:45406]
W0111 23:02:15.515942  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.515987  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516013  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516043  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516060  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516074  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516086  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516097  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516107  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 23:02:15.516128  122083 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 23:02:15.516374  122083 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0111 23:02:15.516396  122083 factory.go:826] Creating scheduler with fit predicates 'map[NoDiskConflict:{} CheckNodeDiskPressure:{} CheckNodePIDPressure:{} CheckNodeCondition:{} NoVolumeZoneConflict:{} MaxAzureDiskVolumeCount:{} MatchInterPodAffinity:{} MaxEBSVolumeCount:{} CheckNodeMemoryPressure:{} MaxCSIVolumeCountPred:{} CheckVolumeBinding:{} MaxGCEPDVolumeCount:{} GeneralPredicates:{} PodToleratesNodeTaints:{}]' and priority functions 'map[TaintTolerationPriority:{} ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{}]'
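The priority list above includes ImageLocalityPriority, which is what the failing TestImageLocality exercises. As a rough illustration only (a sketch of the general idea, not the actual kube-scheduler implementation), an image-locality style priority can be thought of as: sum the sizes of the pod's container images that are already present on a node, then map that sum onto the scheduler's 0-10 priority range, so a node that already holds a large image outscores one that would have to pull it. The package name, byte thresholds, and helper below are illustrative assumptions, not values taken from this log.

package example

// imageLocalityScore is a simplified, illustrative image-locality scoring
// function: it sums the sizes of the pod's images already present on the
// node and maps that sum onto the 0-10 priority range. The thresholds are
// assumptions made for this sketch.
const (
	minImageBytes int64 = 23 * 1024 * 1024   // below this, locality is ignored (assumed)
	maxImageBytes int64 = 1000 * 1024 * 1024 // above this, the score saturates (assumed)
	maxPriority   int64 = 10
)

func imageLocalityScore(podImages []string, nodeImages map[string]int64) int64 {
	var present int64
	for _, img := range podImages {
		if size, ok := nodeImages[img]; ok {
			present += size
		}
	}
	switch {
	case present <= minImageBytes:
		return 0
	case present >= maxImageBytes:
		return maxPriority
	default:
		return maxPriority * (present - minImageBytes) / (maxImageBytes - minImageBytes)
	}
}

Under a score shaped like this, a node that is presumably pre-populated with the test's large image (testnode-large-image) should outscore an empty node such as testnode-0, which is exactly what the assertion at the end of this test log checks.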
I0111 23:02:15.516502  122083 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0111 23:02:15.516735  122083 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 23:02:15.516748  122083 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0111 23:02:15.517792  122083 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (776.649µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:15.518558  122083 get.go:251] Starting watch for /api/v1/pods, rv=24904 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m43s
I0111 23:02:15.616675  122083 shared_informer.go:123] caches populated
I0111 23:02:15.616709  122083 controller_utils.go:1028] Caches are synced for scheduler controller
I0111 23:02:15.617194  122083 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.617217  122083 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.617611  122083 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.617627  122083 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618061  122083 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618081  122083 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618459  122083 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618477  122083 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618804  122083 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.618838  122083 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619245  122083 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619271  122083 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619597  122083 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619622  122083 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619955  122083 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.619981  122083 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.620336  122083 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.620380  122083 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0111 23:02:15.622181  122083 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (694.69µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:15.622730  122083 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (453.252µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46012]
I0111 23:02:15.622852  122083 get.go:251] Starting watch for /api/v1/nodes, rv=24904 labels= fields= timeout=5m13s
I0111 23:02:15.623217  122083 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (398.541µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46004]
I0111 23:02:15.623612  122083 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (315.775µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46006]
I0111 23:02:15.623954  122083 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (373.247µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46016]
I0111 23:02:15.624059  122083 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (372.798µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46008]
I0111 23:02:15.624514  122083 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (342.595µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46010]
I0111 23:02:15.624860  122083 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=24904 labels= fields= timeout=8m6s
I0111 23:02:15.625304  122083 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=24904 labels= fields= timeout=8m59s
I0111 23:02:15.625917  122083 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=24904 labels= fields= timeout=7m47s
I0111 23:02:15.626057  122083 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=24904 labels= fields= timeout=7m31s
I0111 23:02:15.626383  122083 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (2.319454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46014]
I0111 23:02:15.626437  122083 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=24904 labels= fields= timeout=6m40s
I0111 23:02:15.626442  122083 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=24904 labels= fields= timeout=6m16s
I0111 23:02:15.627097  122083 get.go:251] Starting watch for /api/v1/services, rv=24911 labels= fields= timeout=8m48s
I0111 23:02:15.627646  122083 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (530.046µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0111 23:02:15.628348  122083 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=24904 labels= fields= timeout=7m58s
I0111 23:02:15.717086  122083 shared_informer.go:123] caches populated
I0111 23:02:15.817278  122083 shared_informer.go:123] caches populated
I0111 23:02:15.917522  122083 shared_informer.go:123] caches populated
I0111 23:02:16.017768  122083 shared_informer.go:123] caches populated
I0111 23:02:16.118080  122083 shared_informer.go:123] caches populated
I0111 23:02:16.218245  122083 shared_informer.go:123] caches populated
I0111 23:02:16.318941  122083 shared_informer.go:123] caches populated
I0111 23:02:16.419177  122083 shared_informer.go:123] caches populated
I0111 23:02:16.519346  122083 shared_informer.go:123] caches populated
I0111 23:02:16.619542  122083 shared_informer.go:123] caches populated
I0111 23:02:16.622941  122083 wrap.go:47] POST /api/v1/nodes: (2.683891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.623439  122083 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:02:16.623545  122083 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:02:16.623664  122083 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:02:16.625768  122083 wrap.go:47] POST /api/v1/nodes: (2.287113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.626946  122083 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:02:16.627186  122083 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0111 23:02:16.629452  122083 wrap.go:47] POST /api/v1/nodes: (3.052823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.632259  122083 wrap.go:47] POST /api/v1/nodes: (2.269802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.635355  122083 wrap.go:47] POST /api/v1/nodes: (2.52258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.638627  122083 wrap.go:47] POST /api/v1/nodes: (2.724666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.641717  122083 wrap.go:47] POST /api/v1/nodes: (2.342839ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.644592  122083 wrap.go:47] POST /api/v1/nodes: (1.907453ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.647652  122083 wrap.go:47] POST /api/v1/nodes: (2.600246ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.649906  122083 wrap.go:47] POST /api/v1/nodes: (1.706811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.652503  122083 wrap.go:47] POST /api/v1/nodes: (1.926923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.654930  122083 wrap.go:47] POST /api/v1/namespaces/image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pods: (1.996619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.655356  122083 scheduling_queue.go:821] About to try and schedule pod image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pod-using-large-image
I0111 23:02:16.655370  122083 scheduler.go:454] Attempting to schedule pod: image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pod-using-large-image
I0111 23:02:16.655748  122083 scheduler_binder.go:211] AssumePodVolumes for pod "image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pod-using-large-image", node "testnode-0"
I0111 23:02:16.655764  122083 scheduler_binder.go:221] AssumePodVolumes for pod "image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pod-using-large-image", node "testnode-0": all PVCs bound and nothing to do
I0111 23:02:16.655817  122083 factory.go:1166] Attempting to bind pod-using-large-image to testnode-0
I0111 23:02:16.658008  122083 wrap.go:47] POST /api/v1/namespaces/image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pods/pod-using-large-image/binding: (1.915834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.658265  122083 scheduler.go:569] pod image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pod-using-large-image is bound successfully on node testnode-0, 10 nodes evaluated, 10 nodes were found feasible
I0111 23:02:16.660862  122083 wrap.go:47] POST /api/v1/namespaces/image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/events: (2.22832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.758571  122083 wrap.go:47] GET /api/v1/namespaces/image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pods/pod-using-large-image: (2.33102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.760622  122083 wrap.go:47] GET /api/v1/namespaces/image-localityf00ddbf9-15f4-11e9-8dd6-0242ac110002/pods/pod-using-large-image: (1.588039ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.761275  122083 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=24904&timeoutSeconds=583&watch=true: (1.243097297s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45406]
I0111 23:02:16.761523  122083 wrap.go:47] GET /api/v1/nodes?resourceVersion=24904&timeout=5m13s&timeoutSeconds=313&watch=true: (1.138945024s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:45402]
I0111 23:02:16.761816  122083 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=24904&timeout=8m6s&timeoutSeconds=486&watch=true: (1.137220367s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46012]
I0111 23:02:16.761886  122083 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=24904&timeout=7m47s&timeoutSeconds=467&watch=true: (1.136183467s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46008]
I0111 23:02:16.761894  122083 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=24904&timeout=8m59s&timeoutSeconds=539&watch=true: (1.13684779s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46004]
I0111 23:02:16.761917  122083 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=24904&timeout=6m40s&timeoutSeconds=400&watch=true: (1.135690941s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46016]
I0111 23:02:16.761986  122083 wrap.go:47] GET /api/v1/services?resourceVersion=24911&timeout=8m48s&timeoutSeconds=528&watch=true: (1.135136231s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46014]
I0111 23:02:16.761994  122083 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=24904&timeout=6m16s&timeoutSeconds=376&watch=true: (1.135785562s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46006]
E0111 23:02:16.762085  122083 scheduling_queue.go:824] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0111 23:02:16.762087  122083 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=24904&timeout=7m58s&timeoutSeconds=478&watch=true: (1.133962866s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46018]
I0111 23:02:16.762107  122083 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=24904&timeout=7m31s&timeoutSeconds=451&watch=true: (1.136250047s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46010]
I0111 23:02:16.793210  122083 wrap.go:47] DELETE /api/v1/nodes: (31.94007ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.793520  122083 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0111 23:02:16.796505  122083 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.574537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
I0111 23:02:16.799839  122083 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.814023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:46062]
priorities_test.go:214: pod pod-using-large-image got scheduled on an unexpected node: testnode-0. Expected node: testnode-large-image.
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-225622.xml
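The assertion above is the test's node check: once the pod is bound, the test compares the pod's spec.nodeName against the node it expected the ImageLocality priority to pick. A minimal sketch of that kind of check, assuming the context-free client-go Get signature of this Kubernetes era and using placeholder names (the helper itself is not taken from the test source), could look like:

package example

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodOnNode polls until the pod has been bound to a node, then
// verifies it landed on the expected one. It mirrors the shape of the
// failing assertion; it is a sketch, not the test's actual helper.
func waitForPodOnNode(cs kubernetes.Interface, ns, podName, expectedNode string) error {
	var scheduledNode string
	err := wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		scheduledNode = pod.Spec.NodeName
		return scheduledNode != "", nil // bound once nodeName is set
	})
	if err != nil {
		return err
	}
	if scheduledNode != expectedNode {
		return fmt.Errorf("pod %s/%s got scheduled on an unexpected node: %s. Expected node: %s.",
			ns, podName, scheduledNode, expectedNode)
	}
	return nil
}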

Error lines from build-log.txt

... skipping 10 lines ...
I0111 22:42:48.051] process 217 exited with code 0 after 0.0m
I0111 22:42:48.052] Call:  gcloud config get-value account
I0111 22:42:48.345] process 229 exited with code 0 after 0.0m
I0111 22:42:48.346] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 22:42:48.346] Call:  kubectl get -oyaml pods/1c0a4f3b-15f2-11e9-a282-0a580a6c019f
W0111 22:42:48.557] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 22:42:48.560] Command failed
I0111 22:42:48.561] process 241 exited with code 1 after 0.0m
E0111 22:42:48.561] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/1c0a4f3b-15f2-11e9-a282-0a580a6c019f']' returned non-zero exit status 1
I0111 22:42:48.561] Root: /workspace
I0111 22:42:48.561] cd to /workspace
I0111 22:42:48.561] Checkout: /workspace/k8s.io/kubernetes master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,72682:d52ba6413dac9b5441ee6babb01df56c0d0a2c39,72714:d0b35d1b05bdeacbb5e4f0f42decf7f977d323a1,72797:28a6a446a14d064d8a85c3e59b3c77f2127be35b,72831:f62cc81934634433eb8c7dbfc5bf755247a8efeb to /workspace/k8s.io/kubernetes
I0111 22:42:48.561] Call:  git init k8s.io/kubernetes
... skipping 823 lines ...
W0111 22:51:25.955] I0111 22:51:25.955263   56181 deprecated_insecure_serving.go:51] Serving insecurely on [::]:10252
W0111 22:51:25.956] I0111 22:51:25.955430   56181 leaderelection.go:210] attempting to acquire leader lease  kube-system/kube-controller-manager...
W0111 22:51:25.964] I0111 22:51:25.964511   56181 leaderelection.go:220] successfully acquired lease kube-system/kube-controller-manager
W0111 22:51:25.970] I0111 22:51:25.969724   56181 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"6ce37d60-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"148", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 75608445fe0b_6ce2dbfe-15f3-11e9-ac1a-0242ac110002 became leader
W0111 22:51:26.025] I0111 22:51:26.024784   56181 plugins.go:103] No cloud provider specified.
W0111 22:51:26.025] W0111 22:51:26.024841   56181 controllermanager.go:536] "serviceaccount-token" is disabled because there is no private key
W0111 22:51:26.026] E0111 22:51:26.025694   56181 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 22:51:26.026] W0111 22:51:26.025721   56181 controllermanager.go:508] Skipping "service"
W0111 22:51:26.026] I0111 22:51:26.026368   56181 controllermanager.go:516] Started "persistentvolume-expander"
W0111 22:51:26.027] I0111 22:51:26.026473   56181 expand_controller.go:153] Starting expand controller
W0111 22:51:26.027] I0111 22:51:26.026492   56181 controller_utils.go:1021] Waiting for caches to sync for expand controller
W0111 22:51:26.027] I0111 22:51:26.026906   56181 controllermanager.go:516] Started "pvc-protection"
W0111 22:51:26.027] W0111 22:51:26.026926   56181 controllermanager.go:508] Skipping "ttl-after-finished"
... skipping 20 lines ...
W0111 22:51:26.084] I0111 22:51:26.080128   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0111 22:51:26.084] I0111 22:51:26.080184   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0111 22:51:26.084] I0111 22:51:26.080248   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W0111 22:51:26.084] I0111 22:51:26.080277   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0111 22:51:26.085] I0111 22:51:26.080311   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W0111 22:51:26.085] I0111 22:51:26.080339   56181 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0111 22:51:26.085] E0111 22:51:26.080392   56181 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 22:51:26.085] I0111 22:51:26.080422   56181 controllermanager.go:516] Started "resourcequota"
W0111 22:51:26.086] I0111 22:51:26.080445   56181 resource_quota_controller.go:276] Starting resource quota controller
W0111 22:51:26.086] I0111 22:51:26.080482   56181 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0111 22:51:26.086] I0111 22:51:26.080531   56181 resource_quota_monitor.go:301] QuotaMonitor running
W0111 22:51:26.086] I0111 22:51:26.081116   56181 controllermanager.go:516] Started "job"
W0111 22:51:26.086] I0111 22:51:26.081160   56181 job_controller.go:143] Starting job controller
... skipping 63 lines ...
W0111 22:51:26.300] I0111 22:51:26.201252   56181 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0111 22:51:26.300] W0111 22:51:26.201199   56181 controllermanager.go:495] "tokencleaner" is disabled
W0111 22:51:26.300] I0111 22:51:26.211361   56181 controllermanager.go:516] Started "namespace"
W0111 22:51:26.300] I0111 22:51:26.211474   56181 namespace_controller.go:186] Starting namespace controller
W0111 22:51:26.300] I0111 22:51:26.211900   56181 controller_utils.go:1021] Waiting for caches to sync for namespace controller
W0111 22:51:26.301] I0111 22:51:26.212049   56181 node_lifecycle_controller.go:77] Sending events to api server
W0111 22:51:26.301] E0111 22:51:26.212097   56181 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0111 22:51:26.301] W0111 22:51:26.212107   56181 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0111 22:51:26.301] I0111 22:51:26.213165   56181 controllermanager.go:516] Started "persistentvolume-binder"
W0111 22:51:26.301] I0111 22:51:26.213258   56181 pv_controller_base.go:271] Starting persistent volume controller
W0111 22:51:26.302] I0111 22:51:26.213453   56181 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
W0111 22:51:26.302] I0111 22:51:26.214360   56181 controllermanager.go:516] Started "clusterrole-aggregation"
W0111 22:51:26.302] I0111 22:51:26.214491   56181 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0111 22:51:26.302] I0111 22:51:26.214516   56181 controller_utils.go:1021] Waiting for caches to sync for ClusterRoleAggregator controller
W0111 22:51:26.302] I0111 22:51:26.214962   56181 controllermanager.go:516] Started "cronjob"
W0111 22:51:26.302] I0111 22:51:26.215006   56181 cronjob_controller.go:92] Starting CronJob Manager
W0111 22:51:26.303] W0111 22:51:26.271404   56181 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 22:51:26.303] I0111 22:51:26.302831   56181 controller_utils.go:1028] Caches are synced for TTL controller
W0111 22:51:26.315] I0111 22:51:26.314703   56181 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0111 22:51:26.326] E0111 22:51:26.325853   56181 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0111 22:51:26.327] I0111 22:51:26.327120   56181 controller_utils.go:1028] Caches are synced for expand controller
W0111 22:51:26.327] E0111 22:51:26.327314   56181 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0111 22:51:26.393] I0111 22:51:26.392579   56181 controller_utils.go:1028] Caches are synced for PV protection controller
I0111 22:51:26.493] Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1656+c81a3fa66fbb59", GitCommit:"c81a3fa66fbb59644436ec515e20faadeed1eb13", GitTreeState:"clean", BuildDate:"2019-01-11T22:49:27Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
I0111 22:51:26.494] Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1656+c81a3fa66fbb59", GitCommit:"c81a3fa66fbb59644436ec515e20faadeed1eb13", GitTreeState:"clean", BuildDate:"2019-01-11T22:49:45Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
W0111 22:51:26.627] I0111 22:51:26.627127   56181 controller_utils.go:1028] Caches are synced for PVC protection controller
W0111 22:51:26.681] I0111 22:51:26.681354   56181 controller_utils.go:1028] Caches are synced for job controller
W0111 22:51:26.683] I0111 22:51:26.683472   56181 controller_utils.go:1028] Caches are synced for HPA controller
... skipping 47 lines ...
I0111 22:51:27.488] Successful: --output json has correct client info
I0111 22:51:27.494] Successful: --output json has correct server info
I0111 22:51:27.496] +++ [0111 22:51:27] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0111 22:51:27.628] Successful: --client --output json has correct client info
I0111 22:51:27.634] Successful: --client --output json has no server info
I0111 22:51:27.636] +++ [0111 22:51:27] Testing kubectl version: compare json output using additional --short flag
W0111 22:51:27.737] E0111 22:51:27.629859   56181 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 22:51:27.737] I0111 22:51:27.693883   56181 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:51:27.794] I0111 22:51:27.794206   56181 controller_utils.go:1028] Caches are synced for garbage collector controller
I0111 22:51:27.895] Successful: --short --output client json info is equal to non short result
I0111 22:51:27.895] Successful: --short --output server json info is equal to non short result
I0111 22:51:27.895] +++ [0111 22:51:27] Testing kubectl version: compare json output with yaml output
I0111 22:51:27.929] Successful: --output json/yaml has identical information
... skipping 45 lines ...
I0111 22:51:30.673] +++ command: run_RESTMapper_evaluation_tests
I0111 22:51:30.685] +++ [0111 22:51:30] Creating namespace namespace-1547247090-29581
I0111 22:51:30.776] namespace/namespace-1547247090-29581 created
I0111 22:51:30.857] Context "test" modified.
I0111 22:51:30.863] +++ [0111 22:51:30] Testing RESTMapper
W0111 22:51:30.964] /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 145: 56895 Terminated              kubectl proxy --port=0 --www=. --api-prefix="$1" > ${PROXY_PORT_FILE} 2>&1
I0111 22:51:31.065] +++ [0111 22:51:30] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 22:51:31.065] +++ exit code: 0
I0111 22:51:31.127] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 22:51:31.127] bindings                                                                      true         Binding
I0111 22:51:31.128] componentstatuses                 cs                                          false        ComponentStatus
I0111 22:51:31.128] configmaps                        cm                                          true         ConfigMap
I0111 22:51:31.128] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I0111 22:51:50.389] poddisruptionbudget.policy/test-pdb-3 created
I0111 22:51:50.475] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 22:51:50.546] poddisruptionbudget.policy/test-pdb-4 created
I0111 22:51:50.636] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 22:51:50.791] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:51:50.971] pod/env-test-pod created
W0111 22:51:51.072] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 22:51:51.072] error: setting 'all' parameter but found a non empty selector. 
W0111 22:51:51.072] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:51:51.072] I0111 22:51:50.085209   52889 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0111 22:51:51.073] error: min-available and max-unavailable cannot be both specified
I0111 22:51:51.173] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 22:51:51.173] Name:               env-test-pod
I0111 22:51:51.173] Namespace:          test-kubectl-describe-pod
I0111 22:51:51.174] Priority:           0
I0111 22:51:51.174] PriorityClassName:  <none>
I0111 22:51:51.174] Node:               <none>
... skipping 145 lines ...
W0111 22:52:03.153] I0111 22:52:01.973649   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247117-28620", Name:"modified", UID:"8258ec91-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-48mcj
W0111 22:52:03.153] I0111 22:52:02.695274   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247117-28620", Name:"modified", UID:"82c80fde-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-7nz8s
I0111 22:52:03.311] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:03.461] pod/valid-pod created
I0111 22:52:03.564] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:52:03.720] Successful
I0111 22:52:03.720] message:Error from server: cannot restore map from string
I0111 22:52:03.720] has:cannot restore map from string
I0111 22:52:03.807] Successful
I0111 22:52:03.807] message:pod/valid-pod patched (no change)
I0111 22:52:03.807] has:patched (no change)
I0111 22:52:03.890] pod/valid-pod patched
I0111 22:52:03.982] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0111 22:52:04.509] pod/valid-pod patched
I0111 22:52:04.604] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 22:52:04.679] pod/valid-pod patched
I0111 22:52:04.775] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 22:52:04.939] pod/valid-pod patched
I0111 22:52:05.041] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 22:52:05.216] +++ [0111 22:52:05] "kubectl patch with resourceVersion 492" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0111 22:52:05.317] E0111 22:52:03.712005   52889 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0111 22:52:05.448] pod "valid-pod" deleted
I0111 22:52:05.460] pod/valid-pod replaced
I0111 22:52:05.558] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 22:52:05.717] Successful
I0111 22:52:05.718] message:error: --grace-period must have --force specified
I0111 22:52:05.718] has:\-\-grace-period must have \-\-force specified
I0111 22:52:05.875] Successful
I0111 22:52:05.876] message:error: --timeout must have --force specified
I0111 22:52:05.876] has:\-\-timeout must have \-\-force specified
I0111 22:52:06.045] node/node-v1-test created
W0111 22:52:06.145] W0111 22:52:06.044514   56181 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 22:52:06.246] node/node-v1-test replaced
I0111 22:52:06.314] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 22:52:06.404] node "node-v1-test" deleted
I0111 22:52:06.508] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 22:52:06.779] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0111 22:52:07.749] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 26 lines ...
I0111 22:52:09.122] pod/redis-master created
I0111 22:52:09.127] pod/valid-pod created
W0111 22:52:09.227] Edit cancelled, no changes made.
W0111 22:52:09.228] Edit cancelled, no changes made.
W0111 22:52:09.228] Edit cancelled, no changes made.
W0111 22:52:09.228] Edit cancelled, no changes made.
W0111 22:52:09.228] error: 'name' already has a value (valid-pod), and --overwrite is false
W0111 22:52:09.228] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:52:09.329] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0111 22:52:09.329] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0111 22:52:09.410] pod "redis-master" deleted
I0111 22:52:09.416] pod "valid-pod" deleted
I0111 22:52:09.514] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I0111 22:52:15.761] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 22:52:15.763] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:52:15.766] +++ command: run_kubectl_create_error_tests
I0111 22:52:15.778] +++ [0111 22:52:15] Creating namespace namespace-1547247135-4163
I0111 22:52:15.849] namespace/namespace-1547247135-4163 created
I0111 22:52:15.920] Context "test" modified.
I0111 22:52:15.926] +++ [0111 22:52:15] Testing kubectl create with error
W0111 22:52:16.027] Error: required flag(s) "filename" not set
W0111 22:52:16.027] 
W0111 22:52:16.028] 
W0111 22:52:16.028] Examples:
W0111 22:52:16.028]   # Create a pod using the data in pod.json.
W0111 22:52:16.028]   kubectl create -f ./pod.json
W0111 22:52:16.028]   
... skipping 38 lines ...
W0111 22:52:16.033]   kubectl create -f FILENAME [options]
W0111 22:52:16.033] 
W0111 22:52:16.033] Use "kubectl <command> --help" for more information about a given command.
W0111 22:52:16.033] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 22:52:16.033] 
W0111 22:52:16.033] required flag(s) "filename" not set
I0111 22:52:16.152] +++ [0111 22:52:16] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:52:16.253] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 22:52:16.253] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 22:52:16.353] +++ exit code: 0
I0111 22:52:16.354] Recording: run_kubectl_apply_tests
I0111 22:52:16.354] Running command: run_kubectl_apply_tests
I0111 22:52:16.373] 
... skipping 17 lines ...
I0111 22:52:17.506] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0111 22:52:18.332] deployment.extensions "test-deployment-retainkeys" deleted
I0111 22:52:18.426] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:18.584] pod/selector-test-pod created
I0111 22:52:18.685] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 22:52:18.769] Successful
I0111 22:52:18.770] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 22:52:18.770] has:pods "selector-test-pod-dont-apply" not found
I0111 22:52:18.847] pod "selector-test-pod" deleted
I0111 22:52:18.941] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:19.175] pod/test-pod created (server dry run)
I0111 22:52:19.280] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:19.436] pod/test-pod created
... skipping 8 lines ...
W0111 22:52:20.341] I0111 22:52:20.340636   52889 clientconn.go:551] parsed scheme: ""
W0111 22:52:20.341] I0111 22:52:20.340672   52889 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 22:52:20.341] I0111 22:52:20.340741   52889 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 22:52:20.342] I0111 22:52:20.340841   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:52:20.342] I0111 22:52:20.341454   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:52:20.347] I0111 22:52:20.346688   52889 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0111 22:52:20.435] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0111 22:52:20.535] kind.mygroup.example.com/myobj created (server dry run)
I0111 22:52:20.536] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 22:52:20.628] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:20.796] pod/a created
I0111 22:52:22.101] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0111 22:52:22.191] Successful
I0111 22:52:22.191] message:Error from server (NotFound): pods "b" not found
I0111 22:52:22.191] has:pods "b" not found
I0111 22:52:22.346] pod/b created
I0111 22:52:22.360] pod/a pruned
I0111 22:52:23.852] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0111 22:52:23.937] Successful
I0111 22:52:23.937] message:Error from server (NotFound): pods "a" not found
I0111 22:52:23.938] has:pods "a" not found
I0111 22:52:24.019] pod "b" deleted
I0111 22:52:24.116] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:24.270] pod/a created
I0111 22:52:24.364] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0111 22:52:24.452] Successful
I0111 22:52:24.453] message:Error from server (NotFound): pods "b" not found
I0111 22:52:24.453] has:pods "b" not found
I0111 22:52:24.614] pod/b created
I0111 22:52:24.707] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0111 22:52:24.792] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0111 22:52:24.866] pod "a" deleted
I0111 22:52:24.871] pod "b" deleted
I0111 22:52:25.034] Successful
I0111 22:52:25.034] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0111 22:52:25.035] has:all resources selected for prune without explicitly passing --all
I0111 22:52:25.197] pod/a created
I0111 22:52:25.205] pod/b created
I0111 22:52:25.213] service/prune-svc created
I0111 22:52:26.517] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0111 22:52:26.605] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 126 lines ...
I0111 22:52:38.198] Context "test" modified.
I0111 22:52:38.205] +++ [0111 22:52:38] Testing kubectl create filter
I0111 22:52:38.296] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:38.454] pod/selector-test-pod created
I0111 22:52:38.553] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 22:52:38.640] Successful
I0111 22:52:38.641] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 22:52:38.641] has:pods "selector-test-pod-dont-apply" not found
I0111 22:52:38.716] pod "selector-test-pod" deleted
I0111 22:52:38.734] +++ exit code: 0
I0111 22:52:38.769] Recording: run_kubectl_apply_deployments_tests
I0111 22:52:38.769] Running command: run_kubectl_apply_deployments_tests
I0111 22:52:38.791] 
... skipping 38 lines ...
I0111 22:52:40.598] deployment.extensions "my-depl" deleted
I0111 22:52:40.604] replicaset.extensions "my-depl-559b7bc95d" deleted
I0111 22:52:40.608] replicaset.extensions "my-depl-6676598dcb" deleted
I0111 22:52:40.615] pod "my-depl-559b7bc95d-jmfxf" deleted
I0111 22:52:40.619] pod "my-depl-6676598dcb-7wzgd" deleted
W0111 22:52:40.719] I0111 22:52:40.600860   52889 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0111 22:52:40.720] E0111 22:52:40.613465   56181 replica_set.go:450] Sync "namespace-1547247158-4084/my-depl-559b7bc95d" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-559b7bc95d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547247158-4084/my-depl-559b7bc95d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 98a883f9-15f3-11e9-bd57-0242ac110002, UID in object meta: 
I0111 22:52:40.820] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:40.823] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:40.912] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:41.001] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:41.150] deployment.extensions/nginx created
I0111 22:52:41.246] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 22:52:45.447] Successful
I0111 22:52:45.447] message:Error from server (Conflict): error when applying patch:
I0111 22:52:45.448] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547247158-4084\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 22:52:45.448] to:
I0111 22:52:45.448] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 22:52:45.448] Name: "nginx", Namespace: "namespace-1547247158-4084"
I0111 22:52:45.449] Object: &{map["metadata":map["labels":map["name":"nginx"] "name":"nginx" "resourceVersion":"709" "creationTimestamp":"2019-01-11T22:52:41Z" "generation":'\x01' "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547247158-4084\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "namespace":"namespace-1547247158-4084" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547247158-4084/deployments/nginx" "uid":"99b44e02-15f3-11e9-bd57-0242ac110002"] "spec":map["selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["protocol":"TCP" "containerPort":'P']] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e']] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03'] "status":map["unavailableReplicas":'\x03' "conditions":[map["lastTransitionTime":"2019-01-11T22:52:41Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-01-11T22:52:41Z"]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1"]}
I0111 22:52:45.450] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 22:52:45.450] has:Error from server (Conflict)
W0111 22:52:45.551] I0111 22:52:41.153217   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247158-4084", Name:"nginx", UID:"99b44e02-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0111 22:52:45.551] I0111 22:52:41.155960   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247158-4084", Name:"nginx-5d56d6b95f", UID:"99b4c5d3-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-cr98z
W0111 22:52:45.552] I0111 22:52:41.158467   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247158-4084", Name:"nginx-5d56d6b95f", UID:"99b4c5d3-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-jqtkd
W0111 22:52:45.552] I0111 22:52:41.159163   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247158-4084", Name:"nginx-5d56d6b95f", UID:"99b4c5d3-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-mbltq
W0111 22:52:49.670] E0111 22:52:49.669807   56181 replica_set.go:450] Sync "namespace-1547247158-4084/nginx-5d56d6b95f" failed with replicasets.apps "nginx-5d56d6b95f" not found
I0111 22:52:50.651] deployment.extensions/nginx configured
I0111 22:52:50.744] Successful
I0111 22:52:50.745] message:        "name": "nginx2"
I0111 22:52:50.745]           "name": "nginx2"
I0111 22:52:50.745] has:"name": "nginx2"
W0111 22:52:50.846] I0111 22:52:50.654439   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247158-4084", Name:"nginx", UID:"9f5dfa91-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"732", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
... skipping 141 lines ...
I0111 22:52:57.873] +++ [0111 22:52:57] Creating namespace namespace-1547247177-12273
I0111 22:52:57.947] namespace/namespace-1547247177-12273 created
I0111 22:52:58.016] Context "test" modified.
I0111 22:52:58.023] +++ [0111 22:52:58] Testing kubectl get
I0111 22:52:58.115] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:58.198] Successful
I0111 22:52:58.199] message:Error from server (NotFound): pods "abc" not found
I0111 22:52:58.199] has:pods "abc" not found
I0111 22:52:58.289] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:58.376] Successful
I0111 22:52:58.376] message:Error from server (NotFound): pods "abc" not found
I0111 22:52:58.377] has:pods "abc" not found
I0111 22:52:58.465] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:58.550] Successful
I0111 22:52:58.551] message:{
I0111 22:52:58.551]     "apiVersion": "v1",
I0111 22:52:58.551]     "items": [],
... skipping 23 lines ...
I0111 22:52:58.891] has not:No resources found
I0111 22:52:58.973] Successful
I0111 22:52:58.973] message:NAME
I0111 22:52:58.973] has not:No resources found
I0111 22:52:59.059] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:59.171] Successful
I0111 22:52:59.171] message:error: the server doesn't have a resource type "foobar"
I0111 22:52:59.171] has not:No resources found
I0111 22:52:59.251] Successful
I0111 22:52:59.252] message:No resources found.
I0111 22:52:59.252] has:No resources found
I0111 22:52:59.331] Successful
I0111 22:52:59.331] message:
I0111 22:52:59.331] has not:No resources found
I0111 22:52:59.419] Successful
I0111 22:52:59.419] message:No resources found.
I0111 22:52:59.419] has:No resources found
I0111 22:52:59.522] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:52:59.618] Successful
I0111 22:52:59.618] message:Error from server (NotFound): pods "abc" not found
I0111 22:52:59.618] has:pods "abc" not found
I0111 22:52:59.620] FAIL!
I0111 22:52:59.621] message:Error from server (NotFound): pods "abc" not found
I0111 22:52:59.621] has not:List
I0111 22:52:59.621] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0111 22:52:59.742] Successful
I0111 22:52:59.743] message:I0111 22:52:59.687634   68637 loader.go:359] Config loaded from file /tmp/tmp.p5SBDwvQBa/.kube/config
I0111 22:52:59.743] I0111 22:52:59.688402   68637 loader.go:359] Config loaded from file /tmp/tmp.p5SBDwvQBa/.kube/config
I0111 22:52:59.743] I0111 22:52:59.689993   68637 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I0111 22:53:03.247] }
I0111 22:53:03.331] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:53:03.569] <no value>Successful
I0111 22:53:03.569] message:valid-pod:
I0111 22:53:03.569] has:valid-pod:
I0111 22:53:03.652] Successful
I0111 22:53:03.652] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0111 22:53:03.652] 	template was:
I0111 22:53:03.652] 		{.missing}
I0111 22:53:03.652] 	object given to jsonpath engine was:
I0111 22:53:03.653] 		map[string]interface {}{"status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547247182-4163", "selfLink":"/api/v1/namespaces/namespace-1547247182-4163/pods/valid-pod", "uid":"a6d28aad-15f3-11e9-bd57-0242ac110002", "resourceVersion":"805", "creationTimestamp":"2019-01-11T22:53:03Z"}, "spec":map[string]interface {}{"terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"memory":"512Mi", "cpu":"1"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname"}}, "restartPolicy":"Always"}}
I0111 22:53:03.653] has:missing is not found
I0111 22:53:03.736] Successful
I0111 22:53:03.737] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 22:53:03.737] 	template was:
I0111 22:53:03.737] 		{{.missing}}
I0111 22:53:03.737] 	raw data was:
I0111 22:53:03.738] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T22:53:03Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547247182-4163","resourceVersion":"805","selfLink":"/api/v1/namespaces/namespace-1547247182-4163/pods/valid-pod","uid":"a6d28aad-15f3-11e9-bd57-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 22:53:03.738] 	object given to template engine was:
I0111 22:53:03.738] 		map[metadata:map[resourceVersion:805 selfLink:/api/v1/namespaces/namespace-1547247182-4163/pods/valid-pod uid:a6d28aad-15f3-11e9-bd57-0242ac110002 creationTimestamp:2019-01-11T22:53:03Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547247182-4163] spec:map[enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod]
I0111 22:53:03.738] has:map has no entry for key "missing"
W0111 22:53:03.839] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0111 22:53:04.817] E0111 22:53:04.816743   69032 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 22:53:04.918] Successful
I0111 22:53:04.918] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 22:53:04.918] valid-pod   0/1     Pending   0          0s
I0111 22:53:04.918] has:STATUS
I0111 22:53:04.918] Successful
... skipping 80 lines ...
I0111 22:53:07.094]   terminationGracePeriodSeconds: 30
I0111 22:53:07.094] status:
I0111 22:53:07.094]   phase: Pending
I0111 22:53:07.094]   qosClass: Guaranteed
I0111 22:53:07.094] has:name: valid-pod
I0111 22:53:07.094] Successful
I0111 22:53:07.094] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 22:53:07.094] has:"invalid-pod" not found
I0111 22:53:07.160] pod "valid-pod" deleted
I0111 22:53:07.257] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:07.411] pod/redis-master created
I0111 22:53:07.415] pod/valid-pod created
I0111 22:53:07.511] Successful
... skipping 317 lines ...
I0111 22:53:11.772] Running command: run_create_secret_tests
I0111 22:53:11.792] 
I0111 22:53:11.793] +++ Running case: test-cmd.run_create_secret_tests 
I0111 22:53:11.796] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:53:11.799] +++ command: run_create_secret_tests
I0111 22:53:11.887] Successful
I0111 22:53:11.887] message:Error from server (NotFound): secrets "mysecret" not found
I0111 22:53:11.888] has:secrets "mysecret" not found
W0111 22:53:11.988] I0111 22:53:10.915283   52889 clientconn.go:551] parsed scheme: ""
W0111 22:53:11.988] I0111 22:53:10.915328   52889 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 22:53:11.989] I0111 22:53:10.915363   52889 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 22:53:11.989] I0111 22:53:10.915405   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:53:11.989] I0111 22:53:10.915886   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:53:11.989] No resources found.
W0111 22:53:11.989] No resources found.
I0111 22:53:12.090] Successful
I0111 22:53:12.090] message:Error from server (NotFound): secrets "mysecret" not found
I0111 22:53:12.090] has:secrets "mysecret" not found
I0111 22:53:12.090] Successful
I0111 22:53:12.091] message:user-specified
I0111 22:53:12.091] has:user-specified
I0111 22:53:12.121] Successful
I0111 22:53:12.196] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"ac353b59-15f3-11e9-bd57-0242ac110002","resourceVersion":"879","creationTimestamp":"2019-01-11T22:53:12Z"}}
... skipping 80 lines ...
I0111 22:53:14.116] has:Timeout exceeded while reading body
I0111 22:53:14.199] Successful
I0111 22:53:14.199] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 22:53:14.199] valid-pod   0/1     Pending   0          2s
I0111 22:53:14.199] has:valid-pod
I0111 22:53:14.266] Successful
I0111 22:53:14.267] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0111 22:53:14.267] has:Invalid timeout value
I0111 22:53:14.344] pod "valid-pod" deleted
I0111 22:53:14.363] +++ exit code: 0
I0111 22:53:14.397] Recording: run_crd_tests
I0111 22:53:14.398] Running command: run_crd_tests
I0111 22:53:14.418] 
... skipping 167 lines ...
I0111 22:53:18.805] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0111 22:53:18.887] foo.company.com/test patched
I0111 22:53:18.991] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0111 22:53:19.073] foo.company.com/test patched
W0111 22:53:19.174] I0111 22:53:17.043188   52889 controller.go:606] quota admission added evaluator for: foos.company.com
I0111 22:53:19.274] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0111 22:53:19.343] +++ [0111 22:53:19] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0111 22:53:19.404] {
I0111 22:53:19.404]     "apiVersion": "company.com/v1",
I0111 22:53:19.404]     "kind": "Foo",
I0111 22:53:19.405]     "metadata": {
I0111 22:53:19.405]         "annotations": {
I0111 22:53:19.405]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I0111 22:53:20.844] bar.company.com "test" deleted
W0111 22:53:20.944] I0111 22:53:20.582352   52889 controller.go:606] quota admission added evaluator for: bars.company.com
W0111 22:53:20.945] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71600 Killed                  while [ ${tries} -lt 10 ]; do
W0111 22:53:20.945]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0111 22:53:20.945] done
W0111 22:53:20.945] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71599 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0111 22:53:27.939] E0111 22:53:27.938282   56181 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0111 22:53:28.255] I0111 22:53:28.254606   56181 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:53:28.256] I0111 22:53:28.255595   52889 clientconn.go:551] parsed scheme: ""
W0111 22:53:28.256] I0111 22:53:28.255621   52889 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 22:53:28.256] I0111 22:53:28.255659   52889 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 22:53:28.256] I0111 22:53:28.255759   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:53:28.257] I0111 22:53:28.256165   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 62 lines ...
I0111 22:53:34.193] namespace/non-native-resources created
I0111 22:53:34.342] bar.company.com/test created
I0111 22:53:34.438] crd.sh:456: Successful get bars {{len .items}}: 1
I0111 22:53:34.516] namespace "non-native-resources" deleted
I0111 22:53:39.795] crd.sh:459: Successful get bars {{len .items}}: 0
I0111 22:53:39.975] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0111 22:53:40.076] Error from server (NotFound): namespaces "non-native-resources" not found
I0111 22:53:40.177] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0111 22:53:40.193] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 22:53:40.299] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0111 22:53:40.334] +++ exit code: 0
I0111 22:53:40.411] Recording: run_cmd_with_img_tests
I0111 22:53:40.411] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0111 22:53:40.733] I0111 22:53:40.732579   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247220-13447", Name:"test1-fb488bd5d", UID:"bd360bfb-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-z272p
I0111 22:53:40.833] Successful
I0111 22:53:40.834] message:deployment.apps/test1 created
I0111 22:53:40.834] has:deployment.apps/test1 created
I0111 22:53:40.834] deployment.extensions "test1" deleted
I0111 22:53:40.913] Successful
I0111 22:53:40.914] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 22:53:40.914] has:error: Invalid image name "InvalidImageName": invalid reference format
I0111 22:53:40.931] +++ exit code: 0
I0111 22:53:40.975] Recording: run_recursive_resources_tests
I0111 22:53:40.975] Running command: run_recursive_resources_tests
I0111 22:53:41.000] 
I0111 22:53:41.002] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 22:53:41.005] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 22:53:41.182] Context "test" modified.
I0111 22:53:41.289] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:41.571] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:41.574] Successful
I0111 22:53:41.574] message:pod/busybox0 created
I0111 22:53:41.574] pod/busybox1 created
I0111 22:53:41.574] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:53:41.575] has:error validating data: kind not set
I0111 22:53:41.680] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:41.884] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 22:53:41.886] Successful
I0111 22:53:41.887] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:41.887] has:Object 'Kind' is missing
I0111 22:53:41.994] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:42.288] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 22:53:42.290] Successful
I0111 22:53:42.290] message:pod/busybox0 replaced
I0111 22:53:42.290] pod/busybox1 replaced
I0111 22:53:42.290] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:53:42.291] has:error validating data: kind not set
I0111 22:53:42.388] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:42.498] Successful
I0111 22:53:42.498] message:Name:               busybox0
I0111 22:53:42.498] Namespace:          namespace-1547247221-103
I0111 22:53:42.499] Priority:           0
I0111 22:53:42.499] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 22:53:42.515] has:Object 'Kind' is missing
I0111 22:53:42.608] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:42.807] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 22:53:42.809] Successful
I0111 22:53:42.810] message:pod/busybox0 annotated
I0111 22:53:42.810] pod/busybox1 annotated
I0111 22:53:42.810] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:42.810] has:Object 'Kind' is missing
I0111 22:53:42.914] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:43.212] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 22:53:43.215] Successful
I0111 22:53:43.216] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 22:53:43.216] pod/busybox0 configured
I0111 22:53:43.216] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 22:53:43.216] pod/busybox1 configured
I0111 22:53:43.216] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:53:43.216] has:error validating data: kind not set
I0111 22:53:43.314] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:43.482] deployment.apps/nginx created
W0111 22:53:43.583] I0111 22:53:43.485700   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247221-103", Name:"nginx", UID:"bedb434c-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1014", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0111 22:53:43.583] I0111 22:53:43.488754   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx-6f6bb85d9c", UID:"bedbec54-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-rhdjt
W0111 22:53:43.584] I0111 22:53:43.491580   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx-6f6bb85d9c", UID:"bedbec54-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-k82kc
W0111 22:53:43.584] I0111 22:53:43.491879   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx-6f6bb85d9c", UID:"bedbec54-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1015", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-5hghx
... skipping 46 lines ...
I0111 22:53:43.974] deployment.extensions "nginx" deleted
I0111 22:53:44.080] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:44.258] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:44.261] Successful
I0111 22:53:44.261] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0111 22:53:44.261] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 22:53:44.262] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:44.262] has:Object 'Kind' is missing
I0111 22:53:44.364] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:44.460] Successful
I0111 22:53:44.460] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:44.460] has:busybox0:busybox1:
I0111 22:53:44.462] Successful
I0111 22:53:44.462] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:44.463] has:Object 'Kind' is missing
I0111 22:53:44.564] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:44.665] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:44.763] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 22:53:44.766] Successful
I0111 22:53:44.767] message:pod/busybox0 labeled
I0111 22:53:44.767] pod/busybox1 labeled
I0111 22:53:44.767] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:44.767] has:Object 'Kind' is missing
I0111 22:53:44.867] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:44.956] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:45.052] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 22:53:45.054] Successful
I0111 22:53:45.054] message:pod/busybox0 patched
I0111 22:53:45.054] pod/busybox1 patched
I0111 22:53:45.055] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:45.055] has:Object 'Kind' is missing
I0111 22:53:45.152] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:45.350] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:45.352] Successful
I0111 22:53:45.353] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:53:45.353] pod "busybox0" force deleted
I0111 22:53:45.353] pod "busybox1" force deleted
I0111 22:53:45.354] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:53:45.354] has:Object 'Kind' is missing
I0111 22:53:45.452] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:45.622] replicationcontroller/busybox0 created
I0111 22:53:45.628] replicationcontroller/busybox1 created
W0111 22:53:45.729] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 22:53:45.729] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0111 22:53:45.729] I0111 22:53:44.685713   56181 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0111 22:53:45.730] I0111 22:53:45.626760   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox0", UID:"c021ef26-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jhgbz
W0111 22:53:45.730] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:53:45.730] I0111 22:53:45.630573   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox1", UID:"c022c2bd-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1047", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hsfnm
I0111 22:53:45.831] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:45.850] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:45.952] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:53:46.054] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:53:46.258] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 22:53:46.360] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 22:53:46.363] Successful
I0111 22:53:46.364] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 22:53:46.364] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 22:53:46.364] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:46.364] has:Object 'Kind' is missing
I0111 22:53:46.458] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 22:53:46.554] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 22:53:46.663] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:46.765] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:53:46.867] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:53:47.066] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 22:53:47.169] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 22:53:47.172] Successful
I0111 22:53:47.172] message:service/busybox0 exposed
I0111 22:53:47.172] service/busybox1 exposed
I0111 22:53:47.173] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:47.173] has:Object 'Kind' is missing
I0111 22:53:47.272] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:47.374] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:53:47.471] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:53:47.665] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 22:53:47.752] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 22:53:47.754] Successful
I0111 22:53:47.754] message:replicationcontroller/busybox0 scaled
I0111 22:53:47.754] replicationcontroller/busybox1 scaled
I0111 22:53:47.755] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:47.755] has:Object 'Kind' is missing
I0111 22:53:47.841] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:48.019] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:48.021] Successful
I0111 22:53:48.021] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:53:48.022] replicationcontroller "busybox0" force deleted
I0111 22:53:48.022] replicationcontroller "busybox1" force deleted
I0111 22:53:48.022] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:48.022] has:Object 'Kind' is missing
I0111 22:53:48.105] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:48.251] deployment.apps/nginx1-deployment created
I0111 22:53:48.255] deployment.apps/nginx0-deployment created
I0111 22:53:48.356] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 22:53:48.447] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 22:53:48.629] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 22:53:48.631] Successful
I0111 22:53:48.631] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0111 22:53:48.631] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0111 22:53:48.632] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:48.632] has:Object 'Kind' is missing
I0111 22:53:48.719] deployment.apps/nginx1-deployment paused
I0111 22:53:48.723] deployment.apps/nginx0-deployment paused
I0111 22:53:48.822] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 22:53:48.825] Successful
I0111 22:53:48.825] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:48.825] has:Object 'Kind' is missing
I0111 22:53:48.916] deployment.apps/nginx1-deployment resumed
I0111 22:53:48.920] deployment.apps/nginx0-deployment resumed
W0111 22:53:49.021] I0111 22:53:47.564892   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox0", UID:"c021ef26-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1066", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6vwfg
W0111 22:53:49.021] I0111 22:53:47.573193   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox1", UID:"c022c2bd-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jqlsc
W0111 22:53:49.022] I0111 22:53:48.253969   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247221-103", Name:"nginx1-deployment", UID:"c1b2fc60-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0111 22:53:49.022] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:53:49.022] I0111 22:53:48.256917   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx1-deployment-75f6fc6747", UID:"c1b37ef6-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-r2pr5
W0111 22:53:49.022] I0111 22:53:48.259917   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247221-103", Name:"nginx0-deployment", UID:"c1b3bf57-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0111 22:53:49.023] I0111 22:53:48.259917   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx1-deployment-75f6fc6747", UID:"c1b37ef6-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-krznq
W0111 22:53:49.023] I0111 22:53:48.264396   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx0-deployment-b6bb4ccbb", UID:"c1b47269-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-c9rm9
W0111 22:53:49.023] I0111 22:53:48.266844   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247221-103", Name:"nginx0-deployment-b6bb4ccbb", UID:"c1b47269-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-nwws2
I0111 22:53:49.124] generic-resources.sh:408: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
... skipping 6 lines ...
I0111 22:53:49.147] 1         <none>
I0111 22:53:49.147] 
I0111 22:53:49.147] deployment.apps/nginx0-deployment 
I0111 22:53:49.147] REVISION  CHANGE-CAUSE
I0111 22:53:49.147] 1         <none>
I0111 22:53:49.147] 
I0111 22:53:49.148] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:49.148] has:nginx0-deployment
I0111 22:53:49.149] Successful
I0111 22:53:49.149] message:deployment.apps/nginx1-deployment 
I0111 22:53:49.150] REVISION  CHANGE-CAUSE
I0111 22:53:49.150] 1         <none>
I0111 22:53:49.150] 
I0111 22:53:49.150] deployment.apps/nginx0-deployment 
I0111 22:53:49.150] REVISION  CHANGE-CAUSE
I0111 22:53:49.150] 1         <none>
I0111 22:53:49.150] 
I0111 22:53:49.151] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:49.151] has:nginx1-deployment
I0111 22:53:49.153] Successful
I0111 22:53:49.153] message:deployment.apps/nginx1-deployment 
I0111 22:53:49.153] REVISION  CHANGE-CAUSE
I0111 22:53:49.153] 1         <none>
I0111 22:53:49.153] 
I0111 22:53:49.154] deployment.apps/nginx0-deployment 
I0111 22:53:49.154] REVISION  CHANGE-CAUSE
I0111 22:53:49.154] 1         <none>
I0111 22:53:49.154] 
I0111 22:53:49.154] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:49.154] has:Object 'Kind' is missing
I0111 22:53:49.233] deployment.apps "nginx1-deployment" force deleted
I0111 22:53:49.240] deployment.apps "nginx0-deployment" force deleted
W0111 22:53:49.341] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:53:49.342] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:53:50.340] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:50.493] replicationcontroller/busybox0 created
I0111 22:53:50.498] replicationcontroller/busybox1 created
W0111 22:53:50.599] I0111 22:53:50.496706   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox0", UID:"c3092478-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hxw9f
W0111 22:53:50.599] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:53:50.599] I0111 22:53:50.500207   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247221-103", Name:"busybox1", UID:"c309f30e-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-q4k58
I0111 22:53:50.700] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:53:50.700] Successful
I0111 22:53:50.700] message:no rollbacker has been implemented for "ReplicationController"
I0111 22:53:50.700] no rollbacker has been implemented for "ReplicationController"
I0111 22:53:50.701] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0111 22:53:50.701] message:no rollbacker has been implemented for "ReplicationController"
I0111 22:53:50.701] no rollbacker has been implemented for "ReplicationController"
I0111 22:53:50.702] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.702] has:Object 'Kind' is missing
I0111 22:53:50.792] Successful
I0111 22:53:50.793] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.793] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:53:50.793] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:53:50.793] has:Object 'Kind' is missing
I0111 22:53:50.795] Successful
I0111 22:53:50.795] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.795] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:53:50.795] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:53:50.795] has:replicationcontrollers "busybox0" pausing is not supported
I0111 22:53:50.797] Successful
I0111 22:53:50.797] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.798] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:53:50.798] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:53:50.798] has:replicationcontrollers "busybox1" pausing is not supported
I0111 22:53:50.888] Successful
I0111 22:53:50.888] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.888] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:53:50.889] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:53:50.889] has:Object 'Kind' is missing
I0111 22:53:50.889] Successful
I0111 22:53:50.890] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.890] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:53:50.890] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:53:50.890] has:replicationcontrollers "busybox0" resuming is not supported
I0111 22:53:50.892] Successful
I0111 22:53:50.892] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:50.892] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:53:50.893] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:53:50.893] has:replicationcontrollers "busybox0" resuming is not supported
I0111 22:53:50.970] replicationcontroller "busybox0" force deleted
I0111 22:53:50.975] replicationcontroller "busybox1" force deleted
W0111 22:53:51.076] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:53:51.076] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:53:51.997] +++ exit code: 0
I0111 22:53:52.046] Recording: run_namespace_tests
I0111 22:53:52.046] Running command: run_namespace_tests
I0111 22:53:52.067] 
I0111 22:53:52.069] +++ Running case: test-cmd.run_namespace_tests 
I0111 22:53:52.071] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:53:52.074] +++ command: run_namespace_tests
I0111 22:53:52.083] +++ [0111 22:53:52] Testing kubectl(v1:namespaces)
I0111 22:53:52.152] namespace/my-namespace created
I0111 22:53:52.243] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0111 22:53:52.321] namespace "my-namespace" deleted
I0111 22:53:57.438] namespace/my-namespace condition met
I0111 22:53:57.526] Successful
I0111 22:53:57.526] message:Error from server (NotFound): namespaces "my-namespace" not found
I0111 22:53:57.526] has: not found
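Note: my-namespace is created above with kubectl create namespace; the equivalent manifest is a minimal sketch like the following (not taken from the test fixtures):

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace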
I0111 22:53:57.639] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 22:53:57.708] namespace/other created
I0111 22:53:57.801] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0111 22:53:57.888] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:58.040] pod/valid-pod created
I0111 22:53:58.137] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:53:58.222] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:53:58.301] Successful
I0111 22:53:58.301] message:error: a resource cannot be retrieved by name across all namespaces
I0111 22:53:58.301] has:a resource cannot be retrieved by name across all namespaces
I0111 22:53:58.395] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:53:58.491] pod "valid-pod" force deleted
I0111 22:53:58.589] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:53:58.667] namespace "other" deleted
W0111 22:53:58.767] E0111 22:53:57.990465   56181 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 22:53:58.768] I0111 22:53:58.407629   56181 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:53:58.768] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:53:58.768] I0111 22:53:58.507973   56181 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 22:54:01.146] I0111 22:54:01.145653   56181 horizontal.go:313] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1547247221-103
W0111 22:54:01.152] I0111 22:54:01.151475   56181 horizontal.go:313] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1547247221-103
W0111 22:54:02.435] I0111 22:54:02.435174   56181 namespace_controller.go:171] Namespace has been deleted my-namespace
... skipping 114 lines ...
I0111 22:54:19.059] +++ command: run_client_config_tests
I0111 22:54:19.072] +++ [0111 22:54:19] Creating namespace namespace-1547247259-23240
I0111 22:54:19.144] namespace/namespace-1547247259-23240 created
I0111 22:54:19.214] Context "test" modified.
I0111 22:54:19.221] +++ [0111 22:54:19] Testing client config
I0111 22:54:19.290] Successful
I0111 22:54:19.290] message:error: stat missing: no such file or directory
I0111 22:54:19.291] has:missing: no such file or directory
I0111 22:54:19.361] Successful
I0111 22:54:19.362] message:error: stat missing: no such file or directory
I0111 22:54:19.362] has:missing: no such file or directory
I0111 22:54:19.431] Successful
I0111 22:54:19.431] message:error: stat missing: no such file or directory
I0111 22:54:19.431] has:missing: no such file or directory
I0111 22:54:19.500] Successful
I0111 22:54:19.501] message:Error in configuration: context was not found for specified context: missing-context
I0111 22:54:19.501] has:context was not found for specified context: missing-context
I0111 22:54:19.571] Successful
I0111 22:54:19.571] message:error: no server found for cluster "missing-cluster"
I0111 22:54:19.572] has:no server found for cluster "missing-cluster"
I0111 22:54:19.643] Successful
I0111 22:54:19.643] message:error: auth info "missing-user" does not exist
I0111 22:54:19.643] has:auth info "missing-user" does not exist
I0111 22:54:19.784] Successful
I0111 22:54:19.784] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0111 22:54:19.784] has:Error loading config file
I0111 22:54:19.852] Successful
I0111 22:54:19.852] message:error: stat missing-config: no such file or directory
I0111 22:54:19.852] has:no such file or directory
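Note: the last failure above is /tmp/newconfig.yaml declaring apiVersion "v-1" for kind Config; the earlier ones are simply missing kubeconfig files, contexts, clusters, and users. A minimal kubeconfig that would decode cleanly looks roughly like this sketch (the local/test-user names and the server URL are illustrative assumptions, not taken from the test):

apiVersion: v1
kind: Config
clusters:
- name: local                      # assumed name
  cluster:
    server: http://127.0.0.1:8080  # assumed endpoint
contexts:
- name: test
  context:
    cluster: local
    user: test-user                # assumed name
current-context: test
users:
- name: test-user
  user: {}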
I0111 22:54:19.865] +++ exit code: 0
I0111 22:54:19.907] Recording: run_service_accounts_tests
I0111 22:54:19.907] Running command: run_service_accounts_tests
I0111 22:54:19.927] 
I0111 22:54:19.929] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0111 22:54:26.669] Labels:                        run=pi
I0111 22:54:26.669] Annotations:                   <none>
I0111 22:54:26.669] Schedule:                      59 23 31 2 *
I0111 22:54:26.669] Concurrency Policy:            Allow
I0111 22:54:26.669] Suspend:                       False
I0111 22:54:26.669] Successful Job History Limit:  824642493336
I0111 22:54:26.669] Failed Job History Limit:      1
I0111 22:54:26.669] Starting Deadline Seconds:     <unset>
I0111 22:54:26.670] Selector:                      <unset>
I0111 22:54:26.670] Parallelism:                   <unset>
I0111 22:54:26.670] Completions:                   <unset>
I0111 22:54:26.670] Pod Template:
I0111 22:54:26.670]   Labels:  run=pi
... skipping 31 lines ...
I0111 22:54:27.184]                 job-name=test-job
I0111 22:54:27.185]                 run=pi
I0111 22:54:27.185] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0111 22:54:27.185] Parallelism:    1
I0111 22:54:27.185] Completions:    1
I0111 22:54:27.185] Start Time:     Fri, 11 Jan 2019 22:54:26 +0000
I0111 22:54:27.185] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 22:54:27.185] Pod Template:
I0111 22:54:27.186]   Labels:  controller-uid=d8c030bb-15f3-11e9-bd57-0242ac110002
I0111 22:54:27.186]            job-name=test-job
I0111 22:54:27.186]            run=pi
I0111 22:54:27.186]   Containers:
I0111 22:54:27.186]    pi:
... skipping 329 lines ...
I0111 22:54:36.676]   selector:
I0111 22:54:36.676]     role: padawan
I0111 22:54:36.676]   sessionAffinity: None
I0111 22:54:36.676]   type: ClusterIP
I0111 22:54:36.676] status:
I0111 22:54:36.677]   loadBalancer: {}
W0111 22:54:36.777] error: you must specify resources by --filename when --local is set.
W0111 22:54:36.777] Example resource specifications include:
W0111 22:54:36.777]    '-f rsrc.yaml'
W0111 22:54:36.777]    '--filename=rsrc.json'
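Note: the --local error above is kubectl refusing to run without an input manifest; the '-f rsrc.yaml' it suggests would be a file such as the redis-master service whose selector is verified on the next line. A sketch, assuming the selector keys app/role/tier (only the values redis, master, and backend are asserted) and a placeholder port:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:        # keys assumed; the test only checks the values redis:master:backend
    app: redis
    role: master
    tier: backend
  ports:
  - port: 6379     # placeholder, not asserted by the test
    targetPort: 6379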
I0111 22:54:36.878] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 22:54:36.998] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 22:54:37.078] (Bservice "redis-master" deleted
... skipping 93 lines ...
I0111 22:54:42.868] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:54:42.960] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 22:54:43.065] daemonset.extensions/bind rolled back
I0111 22:54:43.165] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:54:43.254] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:54:43.357] Successful
I0111 22:54:43.357] message:error: unable to find specified revision 1000000 in history
I0111 22:54:43.357] has:unable to find specified revision
I0111 22:54:43.446] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:54:43.535] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:54:43.637] daemonset.extensions/bind rolled back
I0111 22:54:43.730] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 22:54:43.819] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
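Note: the daemonset being rolled back in these checks is the bind fixture; its second revision can be reconstructed from the kubectl.kubernetes.io/last-applied-configuration embedded in the daemon_controller error later in this log. A sketch (namespace, change-cause annotation, and the pod anti-affinity block omitted for brevity):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bind
  labels:
    service: bind
spec:
  selector:
    matchLabels:
      service: bind
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: "10%"
  template:
    metadata:
      labels:
        service: bind
    spec:
      containers:
      - name: kubernetes-pause
        image: k8s.gcr.io/pause:latest   # revision 1 used k8s.gcr.io/pause:2.0, per the apps.sh:84 check
      - name: app
        image: k8s.gcr.io/nginx:test-cmd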
... skipping 22 lines ...
I0111 22:54:45.097] Namespace:    namespace-1547247284-10515
I0111 22:54:45.097] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.097] Labels:       app=guestbook
I0111 22:54:45.098]               tier=frontend
I0111 22:54:45.098] Annotations:  <none>
I0111 22:54:45.098] Replicas:     3 current / 3 desired
I0111 22:54:45.098] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.098] Pod Template:
I0111 22:54:45.098]   Labels:  app=guestbook
I0111 22:54:45.098]            tier=frontend
I0111 22:54:45.099]   Containers:
I0111 22:54:45.099]    php-redis:
I0111 22:54:45.099]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:54:45.214] Namespace:    namespace-1547247284-10515
I0111 22:54:45.214] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.214] Labels:       app=guestbook
I0111 22:54:45.214]               tier=frontend
I0111 22:54:45.214] Annotations:  <none>
I0111 22:54:45.215] Replicas:     3 current / 3 desired
I0111 22:54:45.215] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.215] Pod Template:
I0111 22:54:45.215]   Labels:  app=guestbook
I0111 22:54:45.215]            tier=frontend
I0111 22:54:45.215]   Containers:
I0111 22:54:45.215]    php-redis:
I0111 22:54:45.216]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 22:54:45.321] Namespace:    namespace-1547247284-10515
I0111 22:54:45.322] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.322] Labels:       app=guestbook
I0111 22:54:45.322]               tier=frontend
I0111 22:54:45.322] Annotations:  <none>
I0111 22:54:45.322] Replicas:     3 current / 3 desired
I0111 22:54:45.323] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.323] Pod Template:
I0111 22:54:45.323]   Labels:  app=guestbook
I0111 22:54:45.323]            tier=frontend
I0111 22:54:45.323]   Containers:
I0111 22:54:45.323]    php-redis:
I0111 22:54:45.323]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 5 lines ...
I0111 22:54:45.324]     Environment:
I0111 22:54:45.324]       GET_HOSTS_FROM:  dns
I0111 22:54:45.324]     Mounts:            <none>
I0111 22:54:45.324]   Volumes:             <none>
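Note: the repeated kubectl describe output above all refers to the same frontend replication controller; a sketch of its manifest, reconstructed from the fields the describe output shows (ports and resource requests are omitted because the truncated output does not include them):

apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  replicas: 3
  selector:
    app: guestbook
    tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v4
        env:
        - name: GET_HOSTS_FROM
          value: dns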
W0111 22:54:45.425] I0111 22:54:40.681328   52889 controller.go:606] quota admission added evaluator for: daemonsets.extensions
W0111 22:54:45.428] E0111 22:54:43.648322   56181 daemon_controller.go:302] namespace-1547247281-8953/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547247281-8953", SelfLink:"/apis/apps/v1/namespaces/namespace-1547247281-8953/daemonsets/bind", UID:"e18d1ee1-15f3-11e9-bd57-0242ac110002", ResourceVersion:"1356", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682844081, loc:(*time.Location)(0x6962be0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1547247281-8953\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00311c680), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00417a1f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003328780), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00311c740), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0039be8b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00417a270)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0111 22:54:45.429] I0111 22:54:44.443298   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e3307097-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kw84x
W0111 22:54:45.429] I0111 22:54:44.446742   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e3307097-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8g2dz
W0111 22:54:45.429] I0111 22:54:44.446785   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e3307097-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9ltxb
W0111 22:54:45.429] I0111 22:54:44.855377   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zqllj
W0111 22:54:45.430] I0111 22:54:44.858993   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7fktg
W0111 22:54:45.430] I0111 22:54:44.859062   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7wxt6
... skipping 2 lines ...
I0111 22:54:45.531] Namespace:    namespace-1547247284-10515
I0111 22:54:45.531] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.531] Labels:       app=guestbook
I0111 22:54:45.531]               tier=frontend
I0111 22:54:45.531] Annotations:  <none>
I0111 22:54:45.531] Replicas:     3 current / 3 desired
I0111 22:54:45.531] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.532] Pod Template:
I0111 22:54:45.532]   Labels:  app=guestbook
I0111 22:54:45.532]            tier=frontend
I0111 22:54:45.532]   Containers:
I0111 22:54:45.532]    php-redis:
I0111 22:54:45.532]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 22:54:45.585] Namespace:    namespace-1547247284-10515
I0111 22:54:45.585] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.585] Labels:       app=guestbook
I0111 22:54:45.585]               tier=frontend
I0111 22:54:45.585] Annotations:  <none>
I0111 22:54:45.585] Replicas:     3 current / 3 desired
I0111 22:54:45.585] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.586] Pod Template:
I0111 22:54:45.586]   Labels:  app=guestbook
I0111 22:54:45.586]            tier=frontend
I0111 22:54:45.586]   Containers:
I0111 22:54:45.586]    php-redis:
I0111 22:54:45.586]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:54:45.696] Namespace:    namespace-1547247284-10515
I0111 22:54:45.696] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.696] Labels:       app=guestbook
I0111 22:54:45.697]               tier=frontend
I0111 22:54:45.697] Annotations:  <none>
I0111 22:54:45.697] Replicas:     3 current / 3 desired
I0111 22:54:45.697] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.697] Pod Template:
I0111 22:54:45.697]   Labels:  app=guestbook
I0111 22:54:45.697]            tier=frontend
I0111 22:54:45.697]   Containers:
I0111 22:54:45.698]    php-redis:
I0111 22:54:45.698]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:54:45.800] Namespace:    namespace-1547247284-10515
I0111 22:54:45.800] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.800] Labels:       app=guestbook
I0111 22:54:45.800]               tier=frontend
I0111 22:54:45.801] Annotations:  <none>
I0111 22:54:45.801] Replicas:     3 current / 3 desired
I0111 22:54:45.801] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.801] Pod Template:
I0111 22:54:45.801]   Labels:  app=guestbook
I0111 22:54:45.801]            tier=frontend
I0111 22:54:45.801]   Containers:
I0111 22:54:45.801]    php-redis:
I0111 22:54:45.801]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 22:54:45.903] Namespace:    namespace-1547247284-10515
I0111 22:54:45.903] Selector:     app=guestbook,tier=frontend
I0111 22:54:45.903] Labels:       app=guestbook
I0111 22:54:45.904]               tier=frontend
I0111 22:54:45.904] Annotations:  <none>
I0111 22:54:45.904] Replicas:     3 current / 3 desired
I0111 22:54:45.904] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:45.904] Pod Template:
I0111 22:54:45.904]   Labels:  app=guestbook
I0111 22:54:45.904]            tier=frontend
I0111 22:54:45.904]   Containers:
I0111 22:54:45.905]    php-redis:
I0111 22:54:45.905]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 22:54:46.722] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0111 22:54:46.811] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 22:54:46.898] replicationcontroller/frontend scaled
I0111 22:54:46.994] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0111 22:54:47.075] replicationcontroller "frontend" deleted
W0111 22:54:47.176] I0111 22:54:46.089041   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-zqllj
W0111 22:54:47.177] error: Expected replicas to be 3, was 2
W0111 22:54:47.177] I0111 22:54:46.629949   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hdgpv
W0111 22:54:47.177] I0111 22:54:46.903571   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e36fb811-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-hdgpv
W0111 22:54:47.238] I0111 22:54:47.237744   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"redis-master", UID:"e4db144b-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-vgkm5
I0111 22:54:47.339] replicationcontroller/redis-master created
I0111 22:54:47.391] replicationcontroller/redis-slave created
W0111 22:54:47.491] I0111 22:54:47.393543   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"redis-slave", UID:"e4f2fc4f-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5jzlm
... skipping 36 lines ...
I0111 22:54:48.972] service "expose-test-deployment" deleted
I0111 22:54:49.069] Successful
I0111 22:54:49.069] message:service/expose-test-deployment exposed
I0111 22:54:49.069] has:service/expose-test-deployment exposed
I0111 22:54:49.149] service "expose-test-deployment" deleted
I0111 22:54:49.244] Successful
I0111 22:54:49.244] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 22:54:49.244] See 'kubectl expose -h' for help and examples
I0111 22:54:49.244] has:invalid deployment: no selectors
I0111 22:54:49.334] Successful
I0111 22:54:49.334] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 22:54:49.334] See 'kubectl expose -h' for help and examples
I0111 22:54:49.334] has:invalid deployment: no selectors
I0111 22:54:49.485] deployment.apps/nginx-deployment created
I0111 22:54:49.586] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0111 22:54:49.678] service/nginx-deployment exposed
I0111 22:54:49.775] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
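Note: kubectl expose builds a Service from the deployment's selector, which is why the earlier no-selector deployments could not be exposed. The nginx-deployment service verified here would look roughly like this sketch; only port 80 is asserted by the test, the selector label is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  selector:
    app: nginx-deployment   # assumed; kubectl expose copies the deployment's pod selector
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80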
... skipping 23 lines ...
I0111 22:54:51.351] service "frontend" deleted
I0111 22:54:51.358] service "frontend-2" deleted
I0111 22:54:51.364] service "frontend-3" deleted
I0111 22:54:51.371] service "frontend-4" deleted
I0111 22:54:51.378] service "frontend-5" deleted
I0111 22:54:51.476] Successful
I0111 22:54:51.477] message:error: cannot expose a Node
I0111 22:54:51.477] has:cannot expose
I0111 22:54:51.565] Successful
I0111 22:54:51.565] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 22:54:51.565] has:metadata.name: Invalid value
I0111 22:54:51.657] Successful
I0111 22:54:51.657] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0111 22:54:53.795] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 22:54:53.884] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 22:54:53.964] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
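Note: the autoscale verified at core.sh:1237 (min 2, max 3, target CPU 80%) corresponds to a HorizontalPodAutoscaler roughly like the sketch below (autoscaling/v1 form assumed); --max is the one mandatory flag because maxReplicas has no default, which is what the 'required flag(s) "max" not set' error further down exercises:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: frontend
  minReplicas: 2
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80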
W0111 22:54:54.065] I0111 22:54:53.359076   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e881561c-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1638", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2jvng
W0111 22:54:54.066] I0111 22:54:53.361575   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e881561c-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1638", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9wbsv
W0111 22:54:54.066] I0111 22:54:53.361721   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247284-10515", Name:"frontend", UID:"e881561c-15f3-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"1638", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zw5hl
W0111 22:54:54.066] Error: required flag(s) "max" not set
W0111 22:54:54.066] 
W0111 22:54:54.066] 
W0111 22:54:54.067] Examples:
W0111 22:54:54.067]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 22:54:54.067]   kubectl autoscale deployment foo --min=2 --max=10
W0111 22:54:54.067]   
... skipping 54 lines ...
I0111 22:54:54.274]           limits:
I0111 22:54:54.274]             cpu: 300m
I0111 22:54:54.274]           requests:
I0111 22:54:54.274]             cpu: 300m
I0111 22:54:54.274]       terminationGracePeriodSeconds: 0
I0111 22:54:54.274] status: {}
W0111 22:54:54.375] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0111 22:54:54.499] deployment.apps/nginx-deployment-resources created
W0111 22:54:54.600] I0111 22:54:54.502152   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0111 22:54:54.600] I0111 22:54:54.505099   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-69c96fd869", UID:"e9302df1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-bx9mh
W0111 22:54:54.600] I0111 22:54:54.508399   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-69c96fd869", UID:"e9302df1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-wkzl2
W0111 22:54:54.601] I0111 22:54:54.508452   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-69c96fd869", UID:"e9302df1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-p4z8q
I0111 22:54:54.701] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 85 lines ...
I0111 22:54:55.884]   observedGeneration: 4
I0111 22:54:55.884]   replicas: 4
I0111 22:54:55.884]   unavailableReplicas: 4
I0111 22:54:55.884]   updatedReplicas: 1
W0111 22:54:55.984] I0111 22:54:54.876189   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0111 22:54:55.985] I0111 22:54:54.878507   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-6c5996c457", UID:"e96939e3-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1674", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-zrxmb
W0111 22:54:55.985] error: unable to find container named redis
W0111 22:54:55.985] I0111 22:54:55.241596   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0111 22:54:55.986] I0111 22:54:55.246532   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-69c96fd869", UID:"e9302df1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-wkzl2
W0111 22:54:55.986] I0111 22:54:55.247127   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0111 22:54:55.986] I0111 22:54:55.250345   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-5f4579485f", UID:"e9a02317-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-rzgjh
W0111 22:54:55.987] I0111 22:54:55.518791   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0111 22:54:55.987] I0111 22:54:55.523797   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-69c96fd869", UID:"e9302df1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1707", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-bx9mh
W0111 22:54:55.987] I0111 22:54:55.525756   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources", UID:"e92f9e5a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1706", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0111 22:54:55.988] I0111 22:54:55.528602   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247284-10515", Name:"nginx-deployment-resources-ff8d89cb6", UID:"e9ca5829-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-l2w6j
W0111 22:54:55.988] error: you must specify resources by --filename when --local is set.
W0111 22:54:55.988] Example resource specifications include:
W0111 22:54:55.988]    '-f rsrc.yaml'
W0111 22:54:55.988]    '--filename=rsrc.json'
I0111 22:54:56.089] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 22:54:56.136] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 22:54:56.229] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
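Note: the three checks above read containers 0 and 1 of the nginx-deployment-resources pod template after the kubectl set resources calls; in the manifest they correspond to stanzas like this sketch (container names are placeholders, only the cpu values are asserted):

spec:
  template:
    spec:
      containers:
      - name: container-0      # placeholder name
        resources:
          limits:
            cpu: 200m
      - name: container-1      # placeholder name
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 300m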
... skipping 44 lines ...
I0111 22:54:57.672]                 pod-template-hash=55c9b846cc
I0111 22:54:57.673] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0111 22:54:57.673]                 deployment.kubernetes.io/max-replicas: 2
I0111 22:54:57.673]                 deployment.kubernetes.io/revision: 1
I0111 22:54:57.673] Controlled By:  Deployment/test-nginx-apps
I0111 22:54:57.673] Replicas:       1 current / 1 desired
I0111 22:54:57.673] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:54:57.674] Pod Template:
I0111 22:54:57.674]   Labels:  app=test-nginx-apps
I0111 22:54:57.674]            pod-template-hash=55c9b846cc
I0111 22:54:57.674]   Containers:
I0111 22:54:57.674]    nginx:
I0111 22:54:57.674]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0111 22:55:01.909] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0111 22:55:01.909] I0111 22:55:01.390651   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx", UID:"ecf87b13-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1876", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W0111 22:55:01.910] I0111 22:55:01.394368   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-9486b7cb7", UID:"ed4b412f-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1877", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-fxn2b
I0111 22:55:02.906] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:55:03.106] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:55:03.211] deployment.extensions/nginx rolled back
W0111 22:55:03.312] error: unable to find specified revision 1000000 in history
I0111 22:55:04.311] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 22:55:04.399] deployment.extensions/nginx paused
W0111 22:55:04.512] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 22:55:04.613] deployment.extensions/nginx resumed
I0111 22:55:04.719] deployment.extensions/nginx rolled back
I0111 22:55:04.898]     deployment.kubernetes.io/revision-history: 1,3
W0111 22:55:05.086] error: desired revision (3) is different from the running revision (5)
I0111 22:55:05.240] deployment.apps/nginx2 created
I0111 22:55:05.329] deployment.extensions "nginx2" deleted
I0111 22:55:05.414] deployment.extensions "nginx" deleted
I0111 22:55:05.511] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:55:05.670] deployment.apps/nginx-deployment created
I0111 22:55:05.771] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 25 lines ...
W0111 22:55:08.078] I0111 22:55:05.673181   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment", UID:"efd84396-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 22:55:08.079] I0111 22:55:05.675614   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-646d4f779d", UID:"efd8c321-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-sdgtx
W0111 22:55:08.079] I0111 22:55:05.678188   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-646d4f779d", UID:"efd8c321-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-4bptt
W0111 22:55:08.079] I0111 22:55:05.678280   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-646d4f779d", UID:"efd8c321-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-slhlx
W0111 22:55:08.080] I0111 22:55:06.045659   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment", UID:"efd84396-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0111 22:55:08.080] I0111 22:55:06.048786   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-85db47bbdb", UID:"f0119f21-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-v6vr6
W0111 22:55:08.080] error: unable to find container named "redis"
W0111 22:55:08.081] I0111 22:55:07.223088   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment", UID:"efd84396-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1972", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0111 22:55:08.081] I0111 22:55:07.227985   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-646d4f779d", UID:"efd8c321-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-sdgtx
W0111 22:55:08.082] I0111 22:55:07.228266   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment", UID:"efd84396-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1975", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0111 22:55:08.082] I0111 22:55:07.232311   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-dc756cc6", UID:"f0c46de8-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-ksthv
W0111 22:55:08.082] I0111 22:55:07.978548   56181 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment", UID:"f137f422-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 22:55:08.083] I0111 22:55:07.981181   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247296-11004", Name:"nginx-deployment-646d4f779d", UID:"f1388968-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-gzl7r
... skipping 71 lines ...
I0111 22:55:11.767] Namespace:    namespace-1547247309-17494
I0111 22:55:11.767] Selector:     app=guestbook,tier=frontend
I0111 22:55:11.767] Labels:       app=guestbook
I0111 22:55:11.767]               tier=frontend
I0111 22:55:11.767] Annotations:  <none>
I0111 22:55:11.767] Replicas:     3 current / 3 desired
I0111 22:55:11.767] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:11.768] Pod Template:
I0111 22:55:11.768]   Labels:  app=guestbook
I0111 22:55:11.768]            tier=frontend
I0111 22:55:11.768]   Containers:
I0111 22:55:11.768]    php-redis:
I0111 22:55:11.768]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:55:11.877] Namespace:    namespace-1547247309-17494
I0111 22:55:11.877] Selector:     app=guestbook,tier=frontend
I0111 22:55:11.877] Labels:       app=guestbook
I0111 22:55:11.877]               tier=frontend
I0111 22:55:11.877] Annotations:  <none>
I0111 22:55:11.877] Replicas:     3 current / 3 desired
I0111 22:55:11.878] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:11.878] Pod Template:
I0111 22:55:11.878]   Labels:  app=guestbook
I0111 22:55:11.878]            tier=frontend
I0111 22:55:11.878]   Containers:
I0111 22:55:11.878]    php-redis:
I0111 22:55:11.878]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I0111 22:55:11.879]   Type    Reason            Age   From                   Message
I0111 22:55:11.879]   ----    ------            ----  ----                   -------
I0111 22:55:11.880]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-n4br6
I0111 22:55:11.880]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-l9lnv
I0111 22:55:11.880]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-qxqkt
W0111 22:55:11.980] E0111 22:55:09.898664   56181 replica_set.go:450] Sync "namespace-1547247296-11004/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W0111 22:55:11.981] E0111 22:55:09.948825   56181 replica_set.go:450] Sync "namespace-1547247296-11004/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W0111 22:55:11.981] E0111 22:55:09.998823   56181 replica_set.go:450] Sync "namespace-1547247296-11004/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W0111 22:55:11.981] I0111 22:55:10.332556   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f29ecec1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2134", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x2h5g
W0111 22:55:11.982] I0111 22:55:10.335327   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f29ecec1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2134", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gcjjt
W0111 22:55:11.982] I0111 22:55:10.335414   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f29ecec1-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2134", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b6rpk
W0111 22:55:11.982] I0111 22:55:10.771683   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend-no-cascade", UID:"f2e20aed-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-2brwk
W0111 22:55:11.982] I0111 22:55:10.774843   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend-no-cascade", UID:"f2e20aed-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-q4fxp
W0111 22:55:11.983] I0111 22:55:10.774887   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend-no-cascade", UID:"f2e20aed-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-h2clq
W0111 22:55:11.983] E0111 22:55:10.998548   56181 replica_set.go:450] Sync "namespace-1547247309-17494/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0111 22:55:11.983] I0111 22:55:11.534249   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f356a93a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n4br6
W0111 22:55:11.983] I0111 22:55:11.537183   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f356a93a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l9lnv
W0111 22:55:11.984] I0111 22:55:11.537224   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f356a93a-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2171", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qxqkt
I0111 22:55:12.084] apps.sh:541: Successful describe
I0111 22:55:12.084] Name:         frontend
I0111 22:55:12.084] Namespace:    namespace-1547247309-17494
I0111 22:55:12.085] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.085] Labels:       app=guestbook
I0111 22:55:12.085]               tier=frontend
I0111 22:55:12.085] Annotations:  <none>
I0111 22:55:12.085] Replicas:     3 current / 3 desired
I0111 22:55:12.085] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.085] Pod Template:
I0111 22:55:12.085]   Labels:  app=guestbook
I0111 22:55:12.085]            tier=frontend
I0111 22:55:12.085]   Containers:
I0111 22:55:12.086]    php-redis:
I0111 22:55:12.086]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 22:55:12.106] Namespace:    namespace-1547247309-17494
I0111 22:55:12.106] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.106] Labels:       app=guestbook
I0111 22:55:12.106]               tier=frontend
I0111 22:55:12.106] Annotations:  <none>
I0111 22:55:12.107] Replicas:     3 current / 3 desired
I0111 22:55:12.107] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.107] Pod Template:
I0111 22:55:12.107]   Labels:  app=guestbook
I0111 22:55:12.107]            tier=frontend
I0111 22:55:12.107]   Containers:
I0111 22:55:12.107]    php-redis:
I0111 22:55:12.107]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 22:55:12.246] Namespace:    namespace-1547247309-17494
I0111 22:55:12.247] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.247] Labels:       app=guestbook
I0111 22:55:12.247]               tier=frontend
I0111 22:55:12.247] Annotations:  <none>
I0111 22:55:12.247] Replicas:     3 current / 3 desired
I0111 22:55:12.247] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.247] Pod Template:
I0111 22:55:12.247]   Labels:  app=guestbook
I0111 22:55:12.247]            tier=frontend
I0111 22:55:12.247]   Containers:
I0111 22:55:12.248]    php-redis:
I0111 22:55:12.248]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:55:12.359] Namespace:    namespace-1547247309-17494
I0111 22:55:12.359] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.359] Labels:       app=guestbook
I0111 22:55:12.359]               tier=frontend
I0111 22:55:12.359] Annotations:  <none>
I0111 22:55:12.359] Replicas:     3 current / 3 desired
I0111 22:55:12.359] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.360] Pod Template:
I0111 22:55:12.360]   Labels:  app=guestbook
I0111 22:55:12.360]            tier=frontend
I0111 22:55:12.360]   Containers:
I0111 22:55:12.360]    php-redis:
I0111 22:55:12.360]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:55:12.462] Namespace:    namespace-1547247309-17494
I0111 22:55:12.462] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.462] Labels:       app=guestbook
I0111 22:55:12.462]               tier=frontend
I0111 22:55:12.462] Annotations:  <none>
I0111 22:55:12.462] Replicas:     3 current / 3 desired
I0111 22:55:12.463] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.463] Pod Template:
I0111 22:55:12.463]   Labels:  app=guestbook
I0111 22:55:12.463]            tier=frontend
I0111 22:55:12.463]   Containers:
I0111 22:55:12.463]    php-redis:
I0111 22:55:12.463]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 22:55:12.571] Namespace:    namespace-1547247309-17494
I0111 22:55:12.571] Selector:     app=guestbook,tier=frontend
I0111 22:55:12.571] Labels:       app=guestbook
I0111 22:55:12.571]               tier=frontend
I0111 22:55:12.571] Annotations:  <none>
I0111 22:55:12.571] Replicas:     3 current / 3 desired
I0111 22:55:12.571] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:12.572] Pod Template:
I0111 22:55:12.572]   Labels:  app=guestbook
I0111 22:55:12.572]            tier=frontend
I0111 22:55:12.572]   Containers:
I0111 22:55:12.572]    php-redis:
I0111 22:55:12.572]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0111 22:55:17.804] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 22:55:17.893] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 22:55:17.970] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 22:55:18.071] I0111 22:55:17.362916   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f6cff475-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jrg28
W0111 22:55:18.072] I0111 22:55:17.365011   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f6cff475-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dpqv7
W0111 22:55:18.072] I0111 22:55:17.365402   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547247309-17494", Name:"frontend", UID:"f6cff475-15f3-11e9-bd57-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vxh5z
W0111 22:55:18.072] Error: required flag(s) "max" not set
W0111 22:55:18.072] 
W0111 22:55:18.072] 
W0111 22:55:18.072] Examples:
W0111 22:55:18.073]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 22:55:18.073]   kubectl autoscale deployment foo --min=2 --max=10
W0111 22:55:18.073]   
... skipping 88 lines ...
I0111 22:55:21.032] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:55:21.121] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 22:55:21.228] statefulset.apps/nginx rolled back
I0111 22:55:21.322] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 22:55:21.413] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:55:21.516] Successful
I0111 22:55:21.516] message:error: unable to find specified revision 1000000 in history
I0111 22:55:21.517] has:unable to find specified revision
I0111 22:55:21.605] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 22:55:21.697] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:55:21.797] statefulset.apps/nginx rolled back
I0111 22:55:21.897] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 22:55:21.990] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0111 22:55:23.780] Name:         mock
I0111 22:55:23.780] Namespace:    namespace-1547247322-32118
I0111 22:55:23.780] Selector:     app=mock
I0111 22:55:23.780] Labels:       app=mock
I0111 22:55:23.781] Annotations:  <none>
I0111 22:55:23.781] Replicas:     1 current / 1 desired
I0111 22:55:23.781] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:23.781] Pod Template:
I0111 22:55:23.781]   Labels:  app=mock
I0111 22:55:23.781]   Containers:
I0111 22:55:23.781]    mock-container:
I0111 22:55:23.781]     Image:        k8s.gcr.io/pause:2.0
I0111 22:55:23.782]     Port:         9949/TCP
... skipping 56 lines ...
I0111 22:55:25.937] Name:         mock
I0111 22:55:25.937] Namespace:    namespace-1547247322-32118
I0111 22:55:25.937] Selector:     app=mock
I0111 22:55:25.937] Labels:       app=mock
I0111 22:55:25.937] Annotations:  <none>
I0111 22:55:25.937] Replicas:     1 current / 1 desired
I0111 22:55:25.937] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:25.937] Pod Template:
I0111 22:55:25.938]   Labels:  app=mock
I0111 22:55:25.938]   Containers:
I0111 22:55:25.938]    mock-container:
I0111 22:55:25.938]     Image:        k8s.gcr.io/pause:2.0
I0111 22:55:25.938]     Port:         9949/TCP
... skipping 56 lines ...
I0111 22:55:28.095] Name:         mock
I0111 22:55:28.095] Namespace:    namespace-1547247322-32118
I0111 22:55:28.095] Selector:     app=mock
I0111 22:55:28.096] Labels:       app=mock
I0111 22:55:28.096] Annotations:  <none>
I0111 22:55:28.096] Replicas:     1 current / 1 desired
I0111 22:55:28.096] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:28.096] Pod Template:
I0111 22:55:28.096]   Labels:  app=mock
I0111 22:55:28.096]   Containers:
I0111 22:55:28.096]    mock-container:
I0111 22:55:28.096]     Image:        k8s.gcr.io/pause:2.0
I0111 22:55:28.096]     Port:         9949/TCP
... skipping 42 lines ...
I0111 22:55:30.238] Namespace:    namespace-1547247322-32118
I0111 22:55:30.238] Selector:     app=mock
I0111 22:55:30.238] Labels:       app=mock
I0111 22:55:30.238]               status=replaced
I0111 22:55:30.238] Annotations:  <none>
I0111 22:55:30.238] Replicas:     1 current / 1 desired
I0111 22:55:30.239] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:30.239] Pod Template:
I0111 22:55:30.239]   Labels:  app=mock
I0111 22:55:30.239]   Containers:
I0111 22:55:30.239]    mock-container:
I0111 22:55:30.239]     Image:        k8s.gcr.io/pause:2.0
I0111 22:55:30.240]     Port:         9949/TCP
... skipping 11 lines ...
I0111 22:55:30.241] Namespace:    namespace-1547247322-32118
I0111 22:55:30.242] Selector:     app=mock2
I0111 22:55:30.242] Labels:       app=mock2
I0111 22:55:30.242]               status=replaced
I0111 22:55:30.242] Annotations:  <none>
I0111 22:55:30.242] Replicas:     1 current / 1 desired
I0111 22:55:30.242] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:55:30.242] Pod Template:
I0111 22:55:30.242]   Labels:  app=mock2
I0111 22:55:30.243]   Containers:
I0111 22:55:30.243]    mock-container:
I0111 22:55:30.243]     Image:        k8s.gcr.io/pause:2.0
I0111 22:55:30.243]     Port:         9949/TCP
... skipping 107 lines ...
I0111 22:55:34.993] +++ [0111 22:55:34] Testing persistent volumes
I0111 22:55:35.079] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:55:35.224] persistentvolume/pv0001 created
I0111 22:55:35.318] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0111 22:55:35.405] persistentvolume "pv0001" deleted
W0111 22:55:35.506] I0111 22:55:34.178779   56181 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547247322-32118", Name:"mock", UID:"00d61677-15f4-11e9-bd57-0242ac110002", APIVersion:"v1", ResourceVersion:"2630", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-blvv5
W0111 22:55:35.507] E0111 22:55:35.231427   56181 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0111 22:55:35.632] persistentvolume/pv0002 created
W0111 22:55:35.733] E0111 22:55:35.635036   56181 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0111 22:55:35.834] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0111 22:55:35.885] persistentvolume "pv0002" deleted
I0111 22:55:36.114] persistentvolume/pv0003 created
W0111 22:55:36.215] E0111 22:55:36.116806   56181 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0111 22:55:36.315] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0111 22:55:36.370] persistentvolume "pv0003" deleted
I0111 22:55:36.510] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:55:36.530] +++ exit code: 0
I0111 22:55:36.578] Recording: run_persistent_volume_claims_tests
I0111 22:55:36.579] Running command: run_persistent_volume_claims_tests
... skipping 466 lines ...
I0111 22:55:42.219] yes
I0111 22:55:42.219] has:the server doesn't have a resource type
I0111 22:55:42.294] Successful
I0111 22:55:42.295] message:yes
I0111 22:55:42.295] has:yes
I0111 22:55:42.369] Successful
I0111 22:55:42.370] message:error: --subresource can not be used with NonResourceURL
I0111 22:55:42.370] has:subresource can not be used with NonResourceURL
I0111 22:55:42.456] Successful
I0111 22:55:42.545] Successful
I0111 22:55:42.545] message:yes
I0111 22:55:42.545] 0
I0111 22:55:42.545] has:0
... skipping 6 lines ...
I0111 22:55:42.750] role.rbac.authorization.k8s.io/testing-R reconciled
I0111 22:55:42.844] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0111 22:55:42.936] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0111 22:55:43.038] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0111 22:55:43.136] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0111 22:55:43.218] Successful
I0111 22:55:43.219] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0111 22:55:43.219] has:only rbac.authorization.k8s.io/v1 is supported
I0111 22:55:43.312] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0111 22:55:43.319] role.rbac.authorization.k8s.io "testing-R" deleted
I0111 22:55:43.328] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0111 22:55:43.337] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0111 22:55:43.349] Recording: run_retrieve_multiple_tests
... skipping 1021 lines ...
I0111 22:56:10.261] message:node/127.0.0.1 already uncordoned (dry run)
I0111 22:56:10.261] has:already uncordoned
I0111 22:56:10.351] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 22:56:10.429] node/127.0.0.1 labeled
I0111 22:56:10.521] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 22:56:10.590] Successful
I0111 22:56:10.590] message:error: cannot specify both a node name and a --selector option
I0111 22:56:10.590] See 'kubectl drain -h' for help and examples
I0111 22:56:10.590] has:cannot specify both a node name
I0111 22:56:10.658] Successful
I0111 22:56:10.658] message:error: USAGE: cordon NODE [flags]
I0111 22:56:10.658] See 'kubectl cordon -h' for help and examples
I0111 22:56:10.659] has:error\: USAGE\: cordon NODE
I0111 22:56:10.733] node/127.0.0.1 already uncordoned
I0111 22:56:10.808] Successful
I0111 22:56:10.808] message:error: You must provide one or more resources by argument or filename.
I0111 22:56:10.809] Example resource specifications include:
I0111 22:56:10.809]    '-f rsrc.yaml'
I0111 22:56:10.809]    '--filename=rsrc.json'
I0111 22:56:10.809]    '<resource> <name>'
I0111 22:56:10.809]    '<resource>'
I0111 22:56:10.809] has:must provide one or more resources
... skipping 15 lines ...
I0111 22:56:11.243] Successful
I0111 22:56:11.243] message:The following kubectl-compatible plugins are available:
I0111 22:56:11.243] 
I0111 22:56:11.243] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0111 22:56:11.243]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0111 22:56:11.243] 
I0111 22:56:11.244] error: one plugin warning was found
I0111 22:56:11.244] has:kubectl-version overwrites existing command: "kubectl version"
I0111 22:56:11.317] Successful
I0111 22:56:11.318] message:The following kubectl-compatible plugins are available:
I0111 22:56:11.318] 
I0111 22:56:11.318] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:56:11.318] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0111 22:56:11.318]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:56:11.319] 
I0111 22:56:11.319] error: one plugin warning was found
I0111 22:56:11.319] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0111 22:56:11.392] Successful
I0111 22:56:11.393] message:The following kubectl-compatible plugins are available:
I0111 22:56:11.393] 
I0111 22:56:11.393] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:56:11.393] has:plugins are available
I0111 22:56:11.467] Successful
I0111 22:56:11.468] message:
I0111 22:56:11.468] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0111 22:56:11.468] error: unable to find any kubectl plugins in your PATH
I0111 22:56:11.468] has:unable to find any kubectl plugins in your PATH
I0111 22:56:11.546] Successful
I0111 22:56:11.546] message:I am plugin foo
I0111 22:56:11.546] has:plugin foo
I0111 22:56:11.618] Successful
I0111 22:56:11.619] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1656+c81a3fa66fbb59", GitCommit:"c81a3fa66fbb59644436ec515e20faadeed1eb13", GitTreeState:"clean", BuildDate:"2019-01-11T22:49:27Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0111 22:56:11.694] 
I0111 22:56:11.696] +++ Running case: test-cmd.run_impersonation_tests 
I0111 22:56:11.699] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:56:11.701] +++ command: run_impersonation_tests
I0111 22:56:11.711] +++ [0111 22:56:11] Testing impersonation
I0111 22:56:11.780] Successful
I0111 22:56:11.781] message:error: requesting groups or user-extra for  without impersonating a user
I0111 22:56:11.781] has:without impersonating a user
I0111 22:56:11.940] certificatesigningrequest.certificates.k8s.io/foo created
I0111 22:56:12.032] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0111 22:56:12.124] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 22:56:12.206] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 22:56:12.376] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 83 lines ...
W0111 22:56:12.937] I0111 22:56:12.928455   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.937] I0111 22:56:12.928467   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.937] I0111 22:56:12.928490   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.938] I0111 22:56:12.928496   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.938] I0111 22:56:12.928564   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.938] I0111 22:56:12.928575   52889 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:56:12.938] E0111 22:56:12.928569   52889 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0111 22:56:12.980] + make test-integration
I0111 22:56:13.080] No resources found
I0111 22:56:13.081] pod "test-pod-1" force deleted
I0111 22:56:13.081] +++ [0111 22:56:12] TESTS PASSED
I0111 22:56:13.081] junit report dir: /workspace/artifacts
I0111 22:56:13.081] +++ [0111 22:56:12] Clean up complete
... skipping 231 lines ...
I0111 23:08:11.367] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	56.582s
I0111 23:08:11.367] [restful] 2019/01/11 23:00:13 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44345/swaggerapi
I0111 23:08:11.367] [restful] 2019/01/11 23:00:13 log.go:33: [restful/swagger] https://127.0.0.1:44345/swaggerui/ is mapped to folder /swagger-ui/
I0111 23:08:11.367] [restful] 2019/01/11 23:00:16 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:44345/swaggerapi
I0111 23:08:11.367] [restful] 2019/01/11 23:00:16 log.go:33: [restful/swagger] https://127.0.0.1:44345/swaggerui/ is mapped to folder /swagger-ui/
I0111 23:08:11.367] ok  	k8s.io/kubernetes/test/integration/scale	11.499s
I0111 23:08:11.367] FAIL	k8s.io/kubernetes/test/integration/scheduler	472.881s
I0111 23:08:11.368] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	1.098s
I0111 23:08:11.368] ok  	k8s.io/kubernetes/test/integration/secrets	5.097s
I0111 23:08:11.368] ok  	k8s.io/kubernetes/test/integration/serviceaccount	67.818s
I0111 23:08:11.368] [restful] 2019/01/11 23:01:18 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39937/swaggerapi
I0111 23:08:11.368] [restful] 2019/01/11 23:01:18 log.go:33: [restful/swagger] https://127.0.0.1:39937/swaggerui/ is mapped to folder /swagger-ui/
I0111 23:08:11.368] [restful] 2019/01/11 23:01:21 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:39937/swaggerapi
... skipping 7 lines ...
I0111 23:08:11.369] [restful] 2019/01/11 23:01:58 log.go:33: [restful/swagger] https://127.0.0.1:40205/swaggerui/ is mapped to folder /swagger-ui/
I0111 23:08:11.369] ok  	k8s.io/kubernetes/test/integration/tls	14.703s
I0111 23:08:11.369] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.235s
I0111 23:08:11.370] ok  	k8s.io/kubernetes/test/integration/volume	93.157s
I0111 23:08:11.370] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	146.453s
I0111 23:08:25.956] +++ [0111 23:08:25] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-225622.xml
I0111 23:08:25.959] Makefile:184: recipe for target 'test' failed
I0111 23:08:25.969] +++ [0111 23:08:25] Cleaning up etcd
W0111 23:08:26.070] make[1]: *** [test] Error 1
W0111 23:08:26.070] !!! [0111 23:08:25] Call tree:
W0111 23:08:26.070] !!! [0111 23:08:25]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0111 23:08:26.256] +++ [0111 23:08:26] Integration test cleanup complete
I0111 23:08:26.256] Makefile:203: recipe for target 'test-integration' failed
W0111 23:08:26.357] make: *** [test-integration] Error 1
W0111 23:08:28.613] Traceback (most recent call last):
W0111 23:08:28.613]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 23:08:28.613]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 23:08:28.613]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 23:08:28.613]     check(*cmd)
W0111 23:08:28.614]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 23:08:28.614]     subprocess.check_call(cmd)
W0111 23:08:28.614]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 23:08:28.667]     raise CalledProcessError(retcode, cmd)
W0111 23:08:28.668] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 23:08:28.674] Command failed
I0111 23:08:28.674] process 695 exited with code 1 after 24.7m
E0111 23:08:28.675] FAIL: pull-kubernetes-integration
I0111 23:08:28.675] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 23:08:29.191] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 23:08:29.241] process 125770 exited with code 0 after 0.0m
I0111 23:08:29.241] Call:  gcloud config get-value account
I0111 23:08:29.541] process 125782 exited with code 0 after 0.0m
I0111 23:08:29.542] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 23:08:29.542] Upload result and artifacts...
I0111 23:08:29.542] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41079
I0111 23:08:29.542] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41079/artifacts
W0111 23:08:30.652] CommandException: One or more URLs matched no objects.
E0111 23:08:30.791] Command failed
I0111 23:08:30.792] process 125794 exited with code 1 after 0.0m
W0111 23:08:30.792] Remote dir gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41079/artifacts not exist yet
I0111 23:08:30.792] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/batch/pull-kubernetes-integration/41079/artifacts
I0111 23:08:34.375] process 125936 exited with code 0 after 0.1m
W0111 23:08:34.375] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 23:08:34.375] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...