Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-10 13:23
Elapsed: 26m8s
Revision:
Builder: gke-prow-containerd-pool-99179761-fpb3
pod: e2852037-14da-11e9-a09b-0a580a6c03f2
infra-commit: 369b3897b
repo: k8s.io/kubernetes
repo-commit: 3d9c6eb9e6e1759683d2c6cda726aad8a0e85604
repos: {u'k8s.io/kubernetes': u'master'}

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPDBInPreemption 12s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPDBInPreemption$
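A minimal local-reproduction sketch (not part of this job's record): it assumes a k8s.io/kubernetes checkout on GOPATH and an etcd binary on PATH serving 127.0.0.1:2379, matching the endpoint seen in the log below; -count=1 and -timeout are ordinary go test flags added here for convenience.

# Start a local etcd if one is not already running (the test log shows the apiserver dialing 127.0.0.1:2379).
etcd --listen-client-urls http://127.0.0.1:2379 --advertise-client-urls http://127.0.0.1:2379 &

# Re-run only the failing test from the checkout, bypassing the go test result cache.
cd $GOPATH/src/k8s.io/kubernetes
go test -v -count=1 -timeout 600s k8s.io/kubernetes/test/integration/scheduler -run 'TestPDBInPreemption$'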
I0110 13:42:17.447667  122899 feature_gate.go:226] feature gates: &{map[PodPriority:true]}
I0110 13:42:17.448386  122899 services.go:33] Network range for service cluster IPs is unspecified. Defaulting to {10.0.0.0 ffffff00}.
I0110 13:42:17.448404  122899 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0110 13:42:17.448413  122899 master.go:273] Node port range unspecified. Defaulting to 30000-32767.
I0110 13:42:17.448422  122899 master.go:229] Using reconciler: 
I0110 13:42:17.450285  122899 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.450377  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.450390  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.450425  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.450513  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.450763  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.450991  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.451493  122899 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0110 13:42:17.451533  122899 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.451536  122899 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0110 13:42:17.451764  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.451778  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.451811  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.451886  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.452354  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.452399  122899 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 13:42:17.452437  122899 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.452511  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.452534  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.452534  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.452563  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.452647  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.452891  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.452961  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.452976  122899 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges
I0110 13:42:17.452995  122899 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0110 13:42:17.452999  122899 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.453058  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.453068  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.453094  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.453667  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.453930  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.454025  122899 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0110 13:42:17.454118  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.454173  122899 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0110 13:42:17.454175  122899 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.454233  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.454243  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.454269  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.454313  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.454483  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.454555  122899 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets
I0110 13:42:17.454564  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.454594  122899 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0110 13:42:17.454701  122899 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.454765  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.454777  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.454801  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.454934  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.455214  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.455341  122899 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0110 13:42:17.455456  122899 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.455509  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.455520  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.455546  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.455595  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.455660  122899 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0110 13:42:17.455912  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.456091  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.456186  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.456212  122899 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0110 13:42:17.456311  122899 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0110 13:42:17.456338  122899 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.456404  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.456416  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.456444  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.456533  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.456751  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.456822  122899 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps
I0110 13:42:17.456989  122899 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.457051  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.457069  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.457101  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.457182  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.457211  122899 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0110 13:42:17.457363  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.457947  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.458048  122899 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces
I0110 13:42:17.458102  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.458191  122899 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0110 13:42:17.458193  122899 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.458264  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.458290  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.458327  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.458399  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.458662  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.458699  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.458787  122899 store.go:1414] Monitoring endpoints count at <storage-prefix>//endpoints
I0110 13:42:17.458925  122899 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.458994  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.459007  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.459035  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.459081  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.459105  122899 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/endpoints
I0110 13:42:17.459297  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.459400  122899 store.go:1414] Monitoring nodes count at <storage-prefix>//nodes
I0110 13:42:17.459471  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.459525  122899 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.459592  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.459623  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.459658  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.459704  122899 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/nodes
I0110 13:42:17.459831  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.460260  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.460365  122899 store.go:1414] Monitoring pods count at <storage-prefix>//pods
I0110 13:42:17.460481  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.460481  122899 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0110 13:42:17.460475  122899 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.460657  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.460672  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.460699  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.460755  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.461005  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.461081  122899 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0110 13:42:17.461202  122899 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.461264  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.461268  122899 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0110 13:42:17.461275  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.461292  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.461301  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.461333  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.461496  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.461571  122899 store.go:1414] Monitoring services count at <storage-prefix>//services
I0110 13:42:17.461597  122899 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.461695  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.461705  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.461730  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.461762  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.461784  122899 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services
I0110 13:42:17.461922  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.462550  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.462780  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.462891  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.462830  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.462993  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.463046  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.463345  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.463423  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.463571  122899 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.463667  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.463689  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.463722  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.463760  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.464167  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.464259  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.464277  122899 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 13:42:17.464312  122899 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 13:42:17.476702  122899 master.go:408] Skipping disabled API group "auditregistration.k8s.io".
I0110 13:42:17.476729  122899 master.go:416] Enabling API group "authentication.k8s.io".
I0110 13:42:17.476744  122899 master.go:416] Enabling API group "authorization.k8s.io".
I0110 13:42:17.476908  122899 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.477007  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.477024  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.477058  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.477099  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.477391  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.477437  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.477582  122899 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 13:42:17.477663  122899 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 13:42:17.477750  122899 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.477822  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.477835  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.477903  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.477940  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.478165  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.478209  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.478265  122899 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 13:42:17.478391  122899 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.478445  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.478456  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.478482  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.478517  122899 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 13:42:17.478715  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.478926  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.479009  122899 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0110 13:42:17.479077  122899 master.go:416] Enabling API group "autoscaling".
I0110 13:42:17.479210  122899 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.479299  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.479327  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.479354  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.479419  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.479441  122899 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0110 13:42:17.479632  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.479959  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.480032  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.480521  122899 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs
I0110 13:42:17.480595  122899 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0110 13:42:17.480653  122899 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.480717  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.480738  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.480780  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.480817  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.481072  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.481156  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.481205  122899 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0110 13:42:17.481249  122899 master.go:416] Enabling API group "batch".
I0110 13:42:17.481938  122899 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.481266  122899 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0110 13:42:17.482025  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.482126  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.482170  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.482213  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.482429  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.482519  122899 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0110 13:42:17.482539  122899 master.go:416] Enabling API group "certificates.k8s.io".
I0110 13:42:17.482680  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.482685  122899 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.482746  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.482755  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.482781  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.482823  122899 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0110 13:42:17.483001  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.483236  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.483314  122899 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 13:42:17.483324  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.483378  122899 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 13:42:17.483420  122899 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.483490  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.483512  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.483552  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.483646  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.483819  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.483935  122899 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0110 13:42:17.483958  122899 master.go:416] Enabling API group "coordination.k8s.io".
I0110 13:42:17.484072  122899 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.484143  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.484149  122899 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0110 13:42:17.484158  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.484186  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.484086  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.484251  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.484780  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.484900  122899 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//replicationcontrollers
I0110 13:42:17.485013  122899 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.485094  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.485116  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.485166  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.485304  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.485342  122899 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/replicationcontrollers
I0110 13:42:17.485480  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.486115  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.486252  122899 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 13:42:17.486281  122899 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 13:42:17.486384  122899 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.486459  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.486483  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.486727  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.486253  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.486794  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.487057  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.487123  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.487187  122899 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 13:42:17.487235  122899 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 13:42:17.487330  122899 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.487408  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.487432  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.487459  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.487511  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.487796  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.487934  122899 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingresses
I0110 13:42:17.487962  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.487997  122899 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingresses
I0110 13:42:17.488058  122899 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.488131  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.488156  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.488205  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.488257  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.488444  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.488553  122899 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 13:42:17.488703  122899 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.488740  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.488780  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.488803  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.488876  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.488917  122899 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 13:42:17.488934  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.489121  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.489178  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.489246  122899 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 13:42:17.489374  122899 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.489453  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.489471  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.489496  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.489572  122899 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 13:42:17.489771  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.490380  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.490497  122899 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 13:42:17.490522  122899 master.go:416] Enabling API group "extensions".
I0110 13:42:17.490534  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.490564  122899 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 13:42:17.490650  122899 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.490727  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.490750  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.490779  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.490844  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.491871  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.491972  122899 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0110 13:42:17.491990  122899 master.go:416] Enabling API group "networking.k8s.io".
I0110 13:42:17.492000  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.492110  122899 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.492131  122899 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0110 13:42:17.492187  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.492199  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.492226  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.492290  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.492536  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.492644  122899 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0110 13:42:17.492745  122899 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.492775  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.492800  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.492809  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.492833  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.493052  122899 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0110 13:42:17.493170  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.493371  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.493419  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.493522  122899 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicies
I0110 13:42:17.493549  122899 master.go:416] Enabling API group "policy".
I0110 13:42:17.493578  122899 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicies
I0110 13:42:17.493590  122899 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.493676  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.493687  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.493712  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.493757  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.494053  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.494098  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.494148  122899 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 13:42:17.494252  122899 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 13:42:17.494262  122899 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.494319  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.494326  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.494564  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.494665  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.494931  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.495029  122899 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 13:42:17.495060  122899 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.495116  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.495152  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.495181  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.495212  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.495252  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.495254  122899 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 13:42:17.495464  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.495559  122899 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 13:42:17.495659  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.495736  122899 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.495820  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.495834  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.495884  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.495888  122899 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 13:42:17.495963  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.496182  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.496284  122899 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 13:42:17.496327  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.496333  122899 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.496374  122899 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 13:42:17.496408  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.496430  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.496467  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.496539  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.496775  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.496918  122899 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0110 13:42:17.497060  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.497089  122899 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0110 13:42:17.497090  122899 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.497230  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.497265  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.497305  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.497373  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.497630  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.497789  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.497837  122899 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0110 13:42:17.497916  122899 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0110 13:42:17.497916  122899 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.498001  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.498053  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.498115  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.498285  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.498479  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.498550  122899 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0110 13:42:17.498572  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.498682  122899 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.498741  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.498753  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.498776  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.498948  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.499179  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.499316  122899 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0110 13:42:17.499347  122899 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0110 13:42:17.499380  122899 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0110 13:42:17.499463  122899 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0110 13:42:17.499356  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.501170  122899 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.501267  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.501289  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.501329  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.501391  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.501766  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.501873  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.501873  122899 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0110 13:42:17.501940  122899 master.go:416] Enabling API group "scheduling.k8s.io".
I0110 13:42:17.501891  122899 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0110 13:42:17.502004  122899 master.go:408] Skipping disabled API group "settings.k8s.io".
I0110 13:42:17.502155  122899 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.502252  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.502274  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.502369  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.502420  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.502752  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.502898  122899 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 13:42:17.502936  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.502937  122899 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.502990  122899 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 13:42:17.503005  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.503017  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.503054  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.503140  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.503360  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.503467  122899 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 13:42:17.503583  122899 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.503678  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.503703  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.503732  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.503830  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.503888  122899 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 13:42:17.504016  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.504548  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.504676  122899 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0110 13:42:17.504759  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.504744  122899 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.504831  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.505020  122899 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0110 13:42:17.505039  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.505214  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.505320  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.505585  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.505698  122899 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0110 13:42:17.505726  122899 master.go:416] Enabling API group "storage.k8s.io".
I0110 13:42:17.505806  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.505845  122899 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.505914  122899 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0110 13:42:17.505938  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.505950  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.505987  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.506039  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.506255  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.506394  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.506453  122899 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 13:42:17.506529  122899 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 13:42:17.506599  122899 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.507248  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.507271  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.507312  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.507500  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.508119  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.508252  122899 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 13:42:17.508318  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.508453  122899 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 13:42:17.508954  122899 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.509024  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.509047  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.509077  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.509128  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.509681  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.509768  122899 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 13:42:17.509771  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.509803  122899 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 13:42:17.510257  122899 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.510393  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.510435  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.510482  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.510542  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.511004  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.511063  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.511119  122899 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 13:42:17.511190  122899 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 13:42:17.511234  122899 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.511308  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.511332  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.511359  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.511418  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.511956  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.512120  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.512121  122899 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 13:42:17.512206  122899 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 13:42:17.512828  122899 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.512952  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.512974  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.513004  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.513063  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.513294  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.513343  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.513419  122899 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 13:42:17.513515  122899 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 13:42:17.513533  122899 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.513613  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.513637  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.513677  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.513721  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.513972  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.514105  122899 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 13:42:17.514130  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.514216  122899 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 13:42:17.514288  122899 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.514365  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.514638  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.514744  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.515113  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.515378  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.515412  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.515465  122899 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 13:42:17.515568  122899 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 13:42:17.515649  122899 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.515776  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.516229  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.516281  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.516371  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.516633  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.516681  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.516750  122899 store.go:1414] Monitoring deployments.apps count at <storage-prefix>//deployments
I0110 13:42:17.516823  122899 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0110 13:42:17.516895  122899 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.516973  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.516987  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.517102  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.517187  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.517485  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.517538  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.517594  122899 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0110 13:42:17.517650  122899 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0110 13:42:17.517749  122899 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.518033  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.518086  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.518127  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.518171  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.518380  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.518536  122899 store.go:1414] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0110 13:42:17.518701  122899 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.518774  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.518795  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.518835  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.518951  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.519018  122899 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0110 13:42:17.519198  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.519488  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.519587  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.519590  122899 store.go:1414] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0110 13:42:17.519621  122899 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0110 13:42:17.520194  122899 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.520304  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.520368  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.520428  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.520552  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.520767  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.520822  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.520884  122899 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0110 13:42:17.520902  122899 master.go:416] Enabling API group "apps".
I0110 13:42:17.520909  122899 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0110 13:42:17.520932  122899 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.520988  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.520998  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.521024  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.521063  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.522000  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.522060  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.522177  122899 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0110 13:42:17.522389  122899 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0110 13:42:17.522389  122899 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.522476  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.522529  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.522576  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.522647  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.523024  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.523065  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.523119  122899 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0110 13:42:17.523146  122899 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0110 13:42:17.523175  122899 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"08ffe99f-2af0-45dd-aae0-29f30586b4c0", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0110 13:42:17.523197  122899 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0110 13:42:17.523340  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:17.523368  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:17.523425  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:17.523917  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.525803  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:17.525843  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:17.525899  122899 store.go:1414] Monitoring events count at <storage-prefix>//events
I0110 13:42:17.525926  122899 master.go:416] Enabling API group "events.k8s.io".
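Every "Listing and watching *<Type> from storage/cacher.go" line above is the cacher's reflector doing an initial LIST followed by a WATCH for that resource against the freshly registered etcd-backed store, and each "parsed scheme" / "pin 127.0.0.1:2379" pair is the grpc-go client connection being dialed to etcd for it. A minimal client-go sketch of the same list-and-watch pattern, run as an ordinary client rather than inside the apiserver (the kubeconfig path, the pods resource, and the 30s resync period are illustrative assumptions, not values taken from this test):

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the integration test wires its client directly instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/test-kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// LIST+WATCH pods in all namespaces, mirroring the per-resource reflector the cacher logs above.
	lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(), "pods", metav1.NamespaceAll, fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	reflector := cache.NewReflector(lw, &v1.Pod{}, store, 30*time.Second)

	stop := make(chan struct{})
	defer close(stop)
	go reflector.Run(stop)

	// Give the initial LIST a moment to populate the local store.
	time.Sleep(2 * time.Second)
	fmt.Printf("cached %d pods\n", len(store.List()))
}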
W0110 13:42:17.530355  122899 genericapiserver.go:334] Skipping API batch/v2alpha1 because it has no resources.
W0110 13:42:17.540352  122899 genericapiserver.go:334] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0110 13:42:17.540922  122899 genericapiserver.go:334] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0110 13:42:17.542570  122899 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0110 13:42:17.552209  122899 genericapiserver.go:334] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
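The "Skipping API <group>/<version> because it has no resources" warnings mean those alpha versions registered no storage above, so the generic apiserver does not install or serve them. A hedged sketch of confirming which group/versions the server does serve, via ordinary client-go discovery (printServedVersions is a hypothetical helper; it reuses the clientset type and imports from the previous sketch):

// printServedVersions lists the group/versions the apiserver actually installed;
// the skipped alpha versions from the warnings above will not appear here.
func printServedVersions(client kubernetes.Interface) error {
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Printf("served: %s\n", v.GroupVersion) // e.g. "rbac.authorization.k8s.io/v1"
		}
	}
	return nil
}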
I0110 13:42:17.554183  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:17.554210  122899 healthz.go:170] healthz check poststarthook/bootstrap-controller failed: not finished
I0110 13:42:17.554219  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:17.554233  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:17.554241  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:17.554427  122899 wrap.go:47] GET /healthz: (356.741µs) 500
goroutine 62228 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0118cb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0118cb730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc00981eda0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0025bd928, 0xc016cc8000, 0x18a, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41b00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0025bd928, 0xc009b41b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c44480, 0xc008e53720, 0x604d680, 0xc0025bd928, 0xc009b41b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[-]poststarthook/bootstrap-controller failed: reason withheld\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40322]
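This 500, and the near-identical ones that follow at roughly 100ms intervals, are expected while the etcd client connection and the post-start hooks are still completing; the logged body lists exactly which checks are failing. A minimal sketch of the kind of readiness poll a caller performs against /healthz (the base URL is a placeholder, since the test apiserver listens on an ephemeral loopback port):

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"time"
)

// waitForHealthz is a hypothetical helper: it polls /healthz until the server
// returns 200 OK or the deadline passes, logging the failing checks meanwhile.
func waitForHealthz(baseURL string) error {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 body enumerates the failing checks, e.g. "[-]etcd failed: reason withheld".
			log.Printf("healthz not ready yet: %s", body)
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy before the deadline")
}

func main() {
	// Placeholder URL only; substitute the port the test apiserver actually bound.
	if err := waitForHealthz("http://127.0.0.1:8080"); err != nil {
		log.Fatal(err)
	}
}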
I0110 13:42:17.556165  122899 wrap.go:47] GET /api/v1/services: (1.075859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.559830  122899 wrap.go:47] GET /api/v1/services: (1.045474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.562822  122899 wrap.go:47] GET /api/v1/namespaces/default: (1.119617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.569573  122899 wrap.go:47] POST /api/v1/namespaces: (6.342891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.570975  122899 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (981.999µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.574740  122899 wrap.go:47] POST /api/v1/namespaces/default/services: (3.285554ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.576018  122899 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (904.786µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.577916  122899 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (1.474145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.579235  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (876.656µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.580169  122899 wrap.go:47] GET /api/v1/services: (926.151µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:17.580813  122899 wrap.go:47] POST /api/v1/namespaces: (1.211458ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40328]
I0110 13:42:17.580823  122899 wrap.go:47] GET /api/v1/services: (1.300112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40322]
I0110 13:42:17.580927  122899 wrap.go:47] GET /api/v1/namespaces/default: (2.160482ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40324]
I0110 13:42:17.582367  122899 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.079411ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:17.582996  122899 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.58188ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:17.584058  122899 wrap.go:47] POST /api/v1/namespaces: (1.268387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:17.584296  122899 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (893.063µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:17.585279  122899 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (850.05µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:17.586845  122899 wrap.go:47] POST /api/v1/namespaces: (1.277185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
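The 404-then-201 pairs above are the apiserver's bootstrap ensuring the default, kube-system, kube-public and kube-node-lease namespaces plus the "kubernetes" service and its endpoints exist on first start. A hedged sketch of reading those objects back with a clientset (checkBootstrapObjects is a hypothetical helper; the context-free Get signature matches the client-go vintage vendored in this tree, and the imports are the ones from the reflector sketch):

// checkBootstrapObjects reads back the objects the bootstrap requests above created.
func checkBootstrapObjects(client kubernetes.Interface) {
	// The "kubernetes" service corresponds to POST /api/v1/namespaces/default/services in the log.
	svc, err := client.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
	if err != nil {
		fmt.Printf("kubernetes service not found: %v\n", err)
		return
	}
	fmt.Printf("bootstrap service %s has cluster IP %s\n", svc.Name, svc.Spec.ClusterIP)

	// The namespaces created via POST /api/v1/namespaces in the log.
	for _, ns := range []string{"default", "kube-system", "kube-public", "kube-node-lease"} {
		if _, err := client.CoreV1().Namespaces().Get(ns, metav1.GetOptions{}); err != nil {
			fmt.Printf("namespace %s not found: %v\n", ns, err)
		}
	}
}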
I0110 13:42:17.655214  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:17.655286  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:17.655308  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:17.655327  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:17.655527  122899 wrap.go:47] GET /healthz: (407.559µs) 500
goroutine 62312 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016d00c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016d00c40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115512c0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0068a5858, 0xc00a8cf200, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba200)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0068a5858, 0xc016dba200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e60a80, 0xc008e53720, 0x604d680, 0xc0068a5858, 0xc016dba200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:17.755208  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:17.755246  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:17.755256  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:17.755264  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:17.755399  122899 wrap.go:47] GET /healthz: (334.835µs) 500
goroutine 62314 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016d00d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016d00d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115513c0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0068a5880, 0xc00a8cf800, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba800)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0068a5880, 0xc016dba800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e60c00, 0xc008e53720, 0x604d680, 0xc0068a5880, 0xc016dba800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:17.855163  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:17.855192  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:17.855201  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:17.855207  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:17.855349  122899 wrap.go:47] GET /healthz: (301.359µs) 500
goroutine 62316 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016d00e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016d00e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011551620, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0068a5888, 0xc00a8cfe00, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbac00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0068a5888, 0xc016dbac00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e60d20, 0xc008e53720, 0x604d680, 0xc0068a5888, 0xc016dbac00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:17.955259  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:17.955290  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:17.955298  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:17.955304  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:17.955463  122899 wrap.go:47] GET /healthz: (376.887µs) 500
goroutine 62324 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016dae3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016dae3f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0114e18e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc006307178, 0xc001fed680, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc006307178, 0xc012ef5100)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc006307178, 0xc012ef5100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c29b60, 0xc008e53720, 0x604d680, 0xc006307178, 0xc012ef5100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:18.055250  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:18.055288  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.055299  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.055306  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.055459  122899 wrap.go:47] GET /healthz: (348.411µs) 500
goroutine 62326 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016dae4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016dae4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0114e1a20, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc006307188, 0xc001fedc80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc006307188, 0xc012ef5500)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc006307188, 0xc012ef5500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c29ce0, 0xc008e53720, 0x604d680, 0xc006307188, 0xc012ef5500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:18.155168  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:18.155205  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.155215  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.155235  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.155398  122899 wrap.go:47] GET /healthz: (343.565µs) 500
goroutine 62255 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d2ddab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d2ddab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115883e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00526d248, 0xc00104a300, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15d00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00526d248, 0xc00ee15d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011da4540, 0xc008e53720, 0x604d680, 0xc00526d248, 0xc00ee15d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:18.255281  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:18.255320  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.255331  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.255338  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.255498  122899 wrap.go:47] GET /healthz: (355.2µs) 500
goroutine 62257 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d2ddc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d2ddc00, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115886e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00526d268, 0xc00104a900, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e100)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00526d268, 0xc016e3e100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011da4660, 0xc008e53720, 0x604d680, 0xc00526d268, 0xc016e3e100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:18.355205  122899 healthz.go:170] healthz check etcd failed: etcd client connection not yet established
I0110 13:42:18.355257  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.355280  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.355298  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.355507  122899 wrap.go:47] GET /healthz: (425.679µs) 500
goroutine 62339 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d2ddce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d2ddce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0115889e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00526d2e0, 0xc00104ad80, 0x175, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e700)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00526d2e0, 0xc016e3e700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011da47e0, 0xc008e53720, 0x604d680, 0xc00526d2e0, 0xc016e3e700)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[-]etcd failed: reason withheld\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
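The blocks above and below are the test harness polling GET /healthz roughly every 100ms while the apiserver starts up; the endpoint keeps answering 500 until the etcd client connects and every post-start hook reports done. A minimal, standalone sketch of that polling loop, using only the Go standard library — the URL, timeout, and function name are illustrative, not taken from this run:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until it returns 200 OK, mirroring the
// repeated GET /healthz requests in the log: while startup checks such as
// "etcd" or "poststarthook/rbac/bootstrap-roles" are unfinished the server
// answers 500 and lists each failing check in the response body.
func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz not ready (%d):\n%s", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond) // matches the ~100ms cadence seen above
	}
	return fmt.Errorf("healthz did not become ready within %s", timeout)
}

func main() {
	// Illustrative address only; the integration test talks to its own
	// in-process test server, not a fixed port.
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}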
I0110 13:42:18.450781  122899 clientconn.go:551] parsed scheme: ""
I0110 13:42:18.450820  122899 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0110 13:42:18.450904  122899 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0110 13:42:18.451086  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:18.451493  122899 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0110 13:42:18.451627  122899 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0110 13:42:18.455978  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.456000  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.456008  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.456158  122899 wrap.go:47] GET /healthz: (1.064161ms) 500
goroutine 62341 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d2dde30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d2dde30, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011588d40, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00526d388, 0xc00f8286e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ef00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ee00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00526d388, 0xc016e3ee00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011da4b40, 0xc008e53720, 0x604d680, 0xc00526d388, 0xc016e3ee00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40326]
I0110 13:42:18.556082  122899 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.695026ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:18.560256  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.789358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:18.560452  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (6.052194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.560647  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.560666  122899 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0110 13:42:18.560674  122899 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0110 13:42:18.560694  122899 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (4.199431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:18.560820  122899 wrap.go:47] GET /healthz: (5.21851ms) 500
goroutine 62376 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012ff9a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012ff9a40, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011646920, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0093829d8, 0xc00f8291e0, 0x160, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016ddea00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0093829d8, 0xc016dde900)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0093829d8, 0xc016dde900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e68ae0, 0xc008e53720, 0x604d680, 0xc0093829d8, 0xc016dde900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40354]
I0110 13:42:18.560923  122899 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0110 13:42:18.562878  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.781975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:18.562880  122899 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.741125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:18.563351  122899 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (2.25035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.564979  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.760133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:18.565334  122899 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.072915ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:18.565522  122899 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0110 13:42:18.565567  122899 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0110 13:42:18.565533  122899 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (1.783317ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
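For reference, the two objects created by the POSTs to /apis/scheduling.k8s.io/v1beta1/priorityclasses above, reconstructed as Go values of the scheduling/v1beta1 API types. Only the names and values are taken from the log; other fields are deliberately left at their zero values rather than guessed:

package bootstrapsketch

import (
	schedulingv1beta1 "k8s.io/api/scheduling/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// systemPriorityClasses mirrors the two built-in priority classes logged by
// storage_scheduling.go: system-node-critical (2000001000) and
// system-cluster-critical (2000000000). Descriptions and GlobalDefault are
// omitted here, not read from this log.
var systemPriorityClasses = []schedulingv1beta1.PriorityClass{
	{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
	{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
}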
I0110 13:42:18.566175  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (802.536µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40326]
I0110 13:42:18.567576  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (918.676µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.568663  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (736.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.569692  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (751.258µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.570776  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (773.667µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.572058  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (948.665µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.573895  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.574140  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0110 13:42:18.574998  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (721.297µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.576998  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.641531ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.577248  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0110 13:42:18.578315  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (932.042µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.580578  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.835174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.580792  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0110 13:42:18.581845  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (790.14µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.583536  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.583873  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0110 13:42:18.584732  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (722.413µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.586501  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.351702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.586677  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0110 13:42:18.587663  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (765.28µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.589475  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.363614ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.589683  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0110 13:42:18.590806  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (884.254µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.592818  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.479496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.593057  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0110 13:42:18.594152  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (872.191µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.596201  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.596485  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0110 13:42:18.597410  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (719.48µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.599900  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.014914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.600201  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
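The GET-404 followed by POST-201 pairs above come from the rbac/bootstrap-roles post-start hook ensuring each default ClusterRole exists. A hedged sketch of that ensure-if-missing pattern with client-go; the function name is ours, and the context-free Get/Create signatures assume the pre-1.18 client-go API matching this build's vintage:

package bootstrapsketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole creates role only when the apiserver reports it missing,
// reproducing the GET (404) then POST (201) pairs seen in the log.
func ensureClusterRole(client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists: nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // unexpected API error
	}
	_, err = client.RbacV1().ClusterRoles().Create(role)
	return err
}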
I0110 13:42:18.611424  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (11.029579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.617390  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.275959ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.617699  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0110 13:42:18.618770  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (805.499µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.620967  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.796146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.621251  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0110 13:42:18.622379  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (899.241µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.624179  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.438596ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.624550  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0110 13:42:18.625662  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (947.298µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.627582  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.54385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.627813  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0110 13:42:18.628668  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (675.579µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.630509  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.407127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.630711  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0110 13:42:18.631664  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (776.742µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.633831  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.704202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.634145  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0110 13:42:18.635233  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (857.469µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.638058  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.530948ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.638372  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0110 13:42:18.639353  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (760.23µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.641148  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.406113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.641353  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0110 13:42:18.642287  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (724.117µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.644047  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.440768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.644254  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 13:42:18.645157  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (774.679µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.647369  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.647636  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0110 13:42:18.648568  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (737.208µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.650418  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.464031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.650617  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0110 13:42:18.651473  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (691.179µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.653793  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.654039  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0110 13:42:18.655027  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (752.209µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.655584  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.655911  122899 wrap.go:47] GET /healthz: (903.644µs) 500
goroutine 62480 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0170c8460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0170c8460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011c43000, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc016fa84c0, 0xc00fad1540, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045900)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc016fa84c0, 0xc017045900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012225500, 0xc008e53720, 0x604d680, 0xc016fa84c0, 0xc017045900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:18.656668  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.29782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.656981  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0110 13:42:18.658096  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (850.156µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.660295  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.542355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.660504  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 13:42:18.661583  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (865.354µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.663321  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.283744ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.663527  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0110 13:42:18.664502  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (758.184µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.666261  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.367962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.666495  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0110 13:42:18.667591  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (895.636µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.669362  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.252262ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.669705  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0110 13:42:18.670614  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (707.801µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.672345  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.672568  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0110 13:42:18.673485  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (710.235µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.675220  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.313719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.675471  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 13:42:18.676382  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (684.245µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.678724  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.679034  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 13:42:18.680976  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.591214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.683079  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.665621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.683397  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 13:42:18.684505  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (855.807µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.686577  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.514909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.686981  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 13:42:18.688015  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (817.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.690219  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.814884ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.690433  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 13:42:18.691500  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (811.547µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.693253  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.693451  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 13:42:18.694450  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (791.497µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.696130  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.213978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.696374  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 13:42:18.697252  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (737.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.699101  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.443711ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.699402  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 13:42:18.700353  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (746.641µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.702721  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.548395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.702975  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 13:42:18.704091  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (944.166µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.706154  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.622964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.706383  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 13:42:18.707392  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (821.153µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.709267  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.384069ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.709450  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0110 13:42:18.710430  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (768.643µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.712156  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.321619ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.712375  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 13:42:18.713392  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (824.821µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.715186  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.424243ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.715417  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0110 13:42:18.716374  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (747.912µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.718275  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.719967  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 13:42:18.721010  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (779.037µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.722960  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.723170  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 13:42:18.724055  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (741.895µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.726036  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.489769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.726265  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 13:42:18.727566  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (933.614µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.729446  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.729692  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 13:42:18.731737  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.82878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.746079  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (13.685457ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.747216  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 13:42:18.750786  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (2.139612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.758089  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.620847ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.758506  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.759645  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0110 13:42:18.760196  122899 wrap.go:47] GET /healthz: (4.299039ms) 500
goroutine 62280 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc016d107e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc016d107e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0114b8440, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc0173d8000, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7c00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0025bdcf0, 0xc016ce7c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c45ce0, 0xc008e53720, 0x604d680, 0xc0025bdcf0, 0xc016ce7c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:18.762401  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.690764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.767532  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.084636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.773351  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 13:42:18.774555  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (975.716µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.777189  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.095584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.777423  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0110 13:42:18.778767  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.132662ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.784295  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.09265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.784754  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 13:42:18.785838  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (774.659µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.787905  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.477494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.788112  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 13:42:18.789183  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (835.457µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.792355  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.757249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.792633  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 13:42:18.794308  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.456511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.796766  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.811381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.797060  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 13:42:18.823125  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.093511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.837297  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.764183ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.837715  122899 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 13:42:18.856244  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.856493  122899 wrap.go:47] GET /healthz: (1.460245ms) 500
goroutine 62590 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc008162000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc008162000, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01246aa80, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00cee6070, 0xc004b16280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6300)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00cee6070, 0xc0136c6300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00d27a600, 0xc008e53720, 0x604d680, 0xc00cee6070, 0xc0136c6300)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:18.857674  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (3.225529ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.886063  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.627121ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.887005  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0110 13:42:18.902491  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (4.173563ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.916760  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.114107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.917002  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0110 13:42:18.935884  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.352321ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.956577  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:18.956783  122899 wrap.go:47] GET /healthz: (1.270613ms) 500
goroutine 62636 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00fcbc4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00fcbc4d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0131f84a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc014f12310, 0xc001e74280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd000)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc014f12310, 0xc00e4fd000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0126ee0c0, 0xc008e53720, 0x604d680, 0xc014f12310, 0xc00e4fd000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:18.957586  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.141593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.957874  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0110 13:42:18.975696  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.287958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:18.996945  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.522988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.009151  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0110 13:42:19.015565  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.133575ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.036691  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.307772ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.037182  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0110 13:42:19.056247  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.795108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.056645  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.056800  122899 wrap.go:47] GET /healthz: (1.029054ms) 500
goroutine 62673 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00108f260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00108f260, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0132a8220, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00057e758, 0xc001c5c640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00057e758, 0xc015e49900)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00057e758, 0xc015e49900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0125d2300, 0xc008e53720, 0x604d680, 0xc00057e758, 0xc015e49900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:19.076548  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.121558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.076814  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0110 13:42:19.095721  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.282485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.116846  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.437814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.117102  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0110 13:42:19.135586  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.204133ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.156127  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.156311  122899 wrap.go:47] GET /healthz: (1.281567ms) 500
goroutine 62674 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0159702a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0159702a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012f5e9a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000a4e878, 0xc004ac8280, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80900)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000a4e878, 0xc015d80900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01242a3c0, 0xc008e53720, 0x604d680, 0xc000a4e878, 0xc015d80900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:19.156423  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.997737ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.156628  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0110 13:42:19.175783  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.366217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.196214  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.790618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.196421  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0110 13:42:19.216069  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.619278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.237297  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.808331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.237535  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0110 13:42:19.255933  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.476722ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.256347  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.256513  122899 wrap.go:47] GET /healthz: (861.021µs) 500
goroutine 62723 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015e01500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015e01500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0130dd7a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0139628e0, 0xc0000768c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79900)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0139628e0, 0xc012b79900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0123e4a80, 0xc008e53720, 0x604d680, 0xc0139628e0, 0xc012b79900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:19.276543  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.089813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.276834  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0110 13:42:19.295814  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.314687ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.316452  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.008853ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.316781  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0110 13:42:19.335540  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.10884ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.355996  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.356169  122899 wrap.go:47] GET /healthz: (1.175967ms) 500
goroutine 62679 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc015970e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc015970e70, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0130ca920, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000a4ed38, 0xc004b16780, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81d00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000a4ed38, 0xc015d81d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01242ad20, 0xc008e53720, 0x604d680, 0xc000a4ed38, 0xc015d81d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:19.356299  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.858097ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.356516  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0110 13:42:19.375736  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.274933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.396502  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.101689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.396779  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0110 13:42:19.415517  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.071052ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.436683  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.144422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.436917  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0110 13:42:19.456002  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.456163  122899 wrap.go:47] GET /healthz: (1.081624ms) 500
goroutine 62738 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0132a0310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0132a0310, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012cbeee0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000598a50, 0xc004ac88c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d200)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000598a50, 0xc00c88d200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0126f5f20, 0xc008e53720, 0x604d680, 0xc000598a50, 0xc00c88d200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:19.456458  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.766374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.476131  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.670162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.476362  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0110 13:42:19.495778  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.298793ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.516470  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.967249ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.516736  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0110 13:42:19.535732  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.319001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.555973  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.556174  122899 wrap.go:47] GET /healthz: (1.162326ms) 500
goroutine 62754 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013576d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013576d20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01252f120, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc014f13448, 0xc000076dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc014f13448, 0xc00c366500)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc014f13448, 0xc00c366500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0123d1140, 0xc008e53720, 0x604d680, 0xc014f13448, 0xc00c366500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:19.556526  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.050964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.556755  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0110 13:42:19.575626  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.170354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.596394  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.901877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.596651  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0110 13:42:19.615585  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.122341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.636496  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.054298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.636766  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0110 13:42:19.655719  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.655794  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.344558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.655902  122899 wrap.go:47] GET /healthz: (905.797µs) 500
goroutine 62775 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013144700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013144700, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012190680, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000a4f318, 0xc000077400, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab63100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab62f00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000a4f318, 0xc00ab62f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012119bc0, 0xc008e53720, 0x604d680, 0xc000a4f318, 0xc00ab62f00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:19.676139  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.687106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.676377  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0110 13:42:19.695675  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.214724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.716330  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.809675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.716565  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0110 13:42:19.735557  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.078144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.756267  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.756344  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.890791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.756479  122899 wrap.go:47] GET /healthz: (1.455471ms) 500
goroutine 62748 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0132a1500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0132a1500, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0121021a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000598ef8, 0xc0025923c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000598ef8, 0xc009758b00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000598ef8, 0xc009758b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0123318c0, 0xc008e53720, 0x604d680, 0xc000598ef8, 0xc009758b00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:19.756692  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0110 13:42:19.775765  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.263451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.796211  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.803674ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.796448  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0110 13:42:19.815568  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.126068ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.836505  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.836749  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0110 13:42:19.855688  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.855799  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.325339ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:19.855886  122899 wrap.go:47] GET /healthz: (852.75µs) 500
goroutine 62752 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0132a19d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0132a19d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012103120, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000599008, 0xc0025928c0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000599008, 0xc00789c500)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000599008, 0xc00789c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012331f80, 0xc008e53720, 0x604d680, 0xc000599008, 0xc00789c500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:19.876282  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.877417ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.876560  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0110 13:42:19.895949  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.403284ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.916487  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.032642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.916734  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0110 13:42:19.935566  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.154104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.955744  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:19.955940  122899 wrap.go:47] GET /healthz: (917.469µs) 500
goroutine 62802 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0078e7490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0078e7490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01203c9a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc014552830, 0xc001c5cb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc014552830, 0xc007944d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc014552830, 0xc007944c00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc014552830, 0xc007944c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01290d560, 0xc008e53720, 0x604d680, 0xc014552830, 0xc007944c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:19.956271  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.839449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.956508  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0110 13:42:19.975463  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.07273ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.996211  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.782048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:19.996450  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0110 13:42:20.015459  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.077826ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.036165  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.719605ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.036385  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0110 13:42:20.055524  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.125603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.055551  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.055732  122899 wrap.go:47] GET /healthz: (738.035µs) 500
goroutine 62834 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00d2785b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00d2785b0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc012015440, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00cee6f20, 0xc001c5cf00, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8200)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00cee6f20, 0xc0093c8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fa20c0, 0xc008e53720, 0x604d680, 0xc00cee6f20, 0xc0093c8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.076280  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.823114ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.076527  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0110 13:42:20.095588  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.150719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.116273  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.85561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.116493  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0110 13:42:20.135392  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.005017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.156194  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.70738ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.156202  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.156356  122899 wrap.go:47] GET /healthz: (1.077811ms) 500
goroutine 62807 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0078e7ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0078e7ce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc01203de60, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc014552af8, 0xc002592dc0, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945f00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc014552af8, 0xc007945e00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc014552af8, 0xc007945e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e60b40, 0xc008e53720, 0x604d680, 0xc014552af8, 0xc007945e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40330]
I0110 13:42:20.156419  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0110 13:42:20.175677  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.241785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.196217  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.853629ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.196452  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0110 13:42:20.215482  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.068349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.236572  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.001328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.236873  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0110 13:42:20.255695  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.255884  122899 wrap.go:47] GET /healthz: (868.969µs) 500
goroutine 62785 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc013145880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc013145880, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011f41640, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000a4faf8, 0xc002593180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f5000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f4c00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000a4faf8, 0xc0079f4c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011e69920, 0xc008e53720, 0x604d680, 0xc000a4faf8, 0xc0079f4c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.255960  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.447646ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.276177  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.756126ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.276439  122899 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0110 13:42:20.295789  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.355045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.297394  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.203388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.316135  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.718719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.316410  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0110 13:42:20.335518  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.134408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.336996  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.053612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.355563  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.355738  122899 wrap.go:47] GET /healthz: (787.607µs) 500
goroutine 62885 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00caee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00caee380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011e5c3e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc00553c640, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08c00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000a4fbc8, 0xc003d08c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011cac060, 0xc008e53720, 0x604d680, 0xc000a4fbc8, 0xc003d08c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.356553  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.077538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.356828  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 13:42:20.375425  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.053462ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.376889  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.032428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.396103  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.695558ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.396328  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 13:42:20.415552  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.136084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.417233  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.130767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.436089  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.691002ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.436331  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 13:42:20.455290  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (900.693µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.455716  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.455833  122899 wrap.go:47] GET /healthz: (882.416µs) 500
goroutine 62899 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c38b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c38b810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011daf4a0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc0145532f8, 0xc001c5d680, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9a00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc0145532f8, 0xc0017e9a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c66600, 0xc008e53720, 0x604d680, 0xc0145532f8, 0xc0017e9a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.456838  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.14553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.476322  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.916489ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.476596  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 13:42:20.495594  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.130474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.497267  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.289642ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.516173  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.704283ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.516439  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 13:42:20.535417  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (967.945µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.537042  122899 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.206174ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.555697  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.555885  122899 wrap.go:47] GET /healthz: (841.329µs) 500
goroutine 62895 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00caefce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00caefce0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011d81a40, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00993c0c8, 0xc004b17180, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007ded00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007dec00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00993c0c8, 0xc0007dec00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011cad800, 0xc008e53720, 0x604d680, 0xc00993c0c8, 0xc0007dec00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.556279  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.89516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.556560  122899 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 13:42:20.575568  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.159679ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.577367  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.360863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.596275  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.903157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.596518  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0110 13:42:20.615364  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (965.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.616955  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.183324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.636103  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.711434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.636377  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0110 13:42:20.655531  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.108907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.655639  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.655830  122899 wrap.go:47] GET /healthz: (833.196µs) 500
goroutine 62827 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc012f55180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc012f55180, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011f972e0, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc000599360, 0xc00553cb40, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc000599360, 0xc002736000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc000599360, 0xc003767e00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc000599360, 0xc003767e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011fbde60, 0xc008e53720, 0x604d680, 0xc000599360, 0xc003767e00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.657150  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.196508ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.676142  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.715304ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.676391  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0110 13:42:20.695675  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.264834ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.697389  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.217714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.716160  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.73483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.716405  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0110 13:42:20.735324  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (927.417µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.737054  122899 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.208526ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.755659  122899 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0110 13:42:20.755815  122899 wrap.go:47] GET /healthz: (798.027µs) 500
goroutine 62920 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc00c0d8a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc00c0d8a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc011869300, 0x1f4)
net/http.Error(0x7f50f0a15cc0, 0xc00526ca98, 0xc00553d040, 0x136, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
net/http.HandlerFunc.ServeHTTP(0xc009ca5dc0, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01522ce40, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x40e9527, 0xe, 0xc00e139440, 0xc00d21fc00, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4700, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
net/http.HandlerFunc.ServeHTTP(0xc0025770e0, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
net/http.HandlerFunc.ServeHTTP(0xc0118e4740, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80c00)
net/http.HandlerFunc.ServeHTTP(0xc013b2e460, 0x7f50f0a15cc0, 0xc00526ca98, 0xc000c80c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011c29a40, 0xc008e53720, 0x604d680, 0xc00526ca98, 0xc000c80c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\nhealthz check failed\n"
 [Go-http-client/1.1 127.0.0.1:40352]
I0110 13:42:20.756319  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.946925ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.756542  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0110 13:42:20.775385  122899 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.014249ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.776999  122899 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.187123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.796125  122899 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.730845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.796334  122899 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0110 13:42:20.856006  122899 wrap.go:47] GET /healthz: (880.597µs) 200 [Go-http-client/1.1 127.0.0.1:40330]
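Annotation: the repeated 500s above all trace back to the `[-]poststarthook/rbac/bootstrap-roles failed` entry. `/healthz` aggregates named checks, and that one keeps failing until the RBAC bootstrapper finishes creating the default roles and rolebindings logged alongside it; once the last rolebinding lands, the same GET returns 200 as shown here. Below is a minimal sketch of waiting on that endpoint, assuming a plain unauthenticated HTTP address (the integration test's real server address and client setup are not shown in this log).

```go
// Minimal sketch: poll /healthz until the apiserver reports 200.
// The base URL and the absence of auth are assumptions for illustration;
// the integration test's actual client wiring is not shown in this log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every named check, including the post-start hooks, passed
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("http://127.0.0.1:8080", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```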
W0110 13:42:20.856621  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856677  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856729  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856746  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856760  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856771  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856781  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856794  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856805  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:20.856817  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0110 13:42:20.856976  122899 factory.go:745] Creating scheduler from algorithm provider 'DefaultProvider'
I0110 13:42:20.856997  122899 factory.go:826] Creating scheduler with fit predicates 'map[CheckNodePIDPressure:{} MaxCSIVolumeCountPred:{} GeneralPredicates:{} CheckNodeDiskPressure:{} PodToleratesNodeTaints:{} NoVolumeZoneConflict:{} MaxGCEPDVolumeCount:{} MaxAzureDiskVolumeCount:{} CheckNodeMemoryPressure:{} MaxEBSVolumeCount:{} MatchInterPodAffinity:{} NoDiskConflict:{} CheckNodeCondition:{} CheckVolumeBinding:{}]' and priority functions 'map[ImageLocalityPriority:{} SelectorSpreadPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} BalancedResourceAllocation:{} NodePreferAvoidPodsPriority:{} NodeAffinityPriority:{} TaintTolerationPriority:{}]'
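Annotation: the two factory.go lines above list the fit predicates and priority functions registered by the `DefaultProvider` algorithm provider. Conceptually, predicates filter out infeasible nodes and priority functions rank the survivors. The sketch below illustrates only that split with toy types; the field names and functions are invented for the example and are not the scheduler's real API.

```go
// Conceptual sketch only: a "predicate" filters nodes, a "priority"
// ranks the nodes that pass. Toy types; not kube-scheduler code.
package main

import "fmt"

type node struct {
	name        string
	allocatable int64 // free millicores of CPU on the node (assumed unit)
}

type pod struct {
	name    string
	request int64 // millicores of CPU the pod requests (assumed unit)
}

// fitsCPU plays the role of a fit predicate (compare GeneralPredicates):
// a node is feasible only if the pod's CPU request fits.
func fitsCPU(p pod, n node) bool { return p.request <= n.allocatable }

// leastRequested plays the role of a priority function (compare
// LeastRequestedPriority): more CPU left after placement scores higher.
func leastRequested(p pod, n node) int64 { return n.allocatable - p.request }

func main() {
	p := pod{name: "preemptor-pod", request: 600}
	nodes := []node{{"node-1", 500}, {"node-2", 900}}
	for _, n := range nodes {
		if !fitsCPU(p, n) {
			fmt.Printf("%s: Insufficient cpu\n", n.name)
			continue
		}
		fmt.Printf("%s: score %d\n", n.name, leastRequested(p, n))
	}
}
```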
I0110 13:42:20.857118  122899 controller_utils.go:1021] Waiting for caches to sync for scheduler controller
I0110 13:42:20.857401  122899 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 13:42:20.857429  122899 reflector.go:169] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:194
I0110 13:42:20.858388  122899 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (674.808µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:20.859270  122899 get.go:251] Starting watch for /api/v1/pods, rv=23151 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=6m49s
I0110 13:42:20.957353  122899 shared_informer.go:123] caches populated
I0110 13:42:20.957387  122899 controller_utils.go:1028] Caches are synced for scheduler controller
I0110 13:42:20.957763  122899 reflector.go:131] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.957794  122899 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.957822  122899 reflector.go:131] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.957845  122899 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958298  122899 reflector.go:131] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958320  122899 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958592  122899 reflector.go:131] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958630  122899 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958649  122899 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.958660  122899 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959001  122899 reflector.go:131] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959014  122899 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959235  122899 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (1.159119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.959261  122899 reflector.go:131] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959279  122899 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959578  122899 reflector.go:131] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959591  122899 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.959692  122899 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (376.83µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 13:42:20.959755  122899 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (422.832µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 13:42:20.960097  122899 get.go:251] Starting watch for /api/v1/nodes, rv=23151 labels= fields= timeout=6m12s
I0110 13:42:20.960104  122899 reflector.go:131] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.960156  122899 reflector.go:169] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:132
I0110 13:42:20.960165  122899 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (338.614µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.960173  122899 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (397.987µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 13:42:20.960500  122899 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (250.307µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40422]
I0110 13:42:20.960804  122899 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (1.657787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 13:42:20.960914  122899 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=23151 labels= fields= timeout=8m32s
I0110 13:42:20.961178  122899 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=23151 labels= fields= timeout=7m33s
I0110 13:42:20.961348  122899 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (287.876µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40352]
I0110 13:42:20.961621  122899 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=23151 labels= fields= timeout=8m18s
I0110 13:42:20.961662  122899 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=23151 labels= fields= timeout=9m13s
I0110 13:42:20.961970  122899 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=23151 labels= fields= timeout=5m32s
I0110 13:42:20.962063  122899 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=23151 labels= fields= timeout=7m20s
I0110 13:42:20.962149  122899 get.go:251] Starting watch for /api/v1/services, rv=23156 labels= fields= timeout=9m10s
I0110 13:42:20.962225  122899 wrap.go:47] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (1.639004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 13:42:20.962735  122899 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=23151 labels= fields= timeout=6m25s
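Annotation: the block above is client-go's shared informer machinery starting up, one reflector (list + watch) per resource type the scheduler and the disruption controller consume, followed by the "caches populated" checks below. A minimal client-go sketch of the same pattern is given here; building the `clientset` is omitted, and Pods and Nodes stand in for the full set of resources watched in the log.

```go
// Minimal sketch of the shared-informer pattern seen above: a factory
// starts one reflector per resource and the caller waits for every
// cache to report it is populated. The log also watches ReplicaSets,
// Services, PodDisruptionBudgets, etc.; Pods and Nodes stand in for
// them here. The 1-second resync mirrors the "(1s)" reflectors above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func startInformers(clientset kubernetes.Interface, stopCh <-chan struct{}) error {
	factory := informers.NewSharedInformerFactory(clientset, 1*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	nodeInformer := factory.Core().V1().Nodes().Informer()

	factory.Start(stopCh) // kicks off list+watch for each informer

	if !cache.WaitForCacheSync(stopCh, podInformer.HasSynced, nodeInformer.HasSynced) {
		return fmt.Errorf("caches never populated")
	}
	fmt.Println("caches populated")
	return nil
}

func main() {
	// Building the clientset (from a kubeconfig or a test apiserver) is
	// omitted; this file only shows the informer wiring itself.
	_ = startInformers
}
```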
I0110 13:42:21.057697  122899 shared_informer.go:123] caches populated
I0110 13:42:21.157950  122899 shared_informer.go:123] caches populated
I0110 13:42:21.258188  122899 shared_informer.go:123] caches populated
I0110 13:42:21.358395  122899 shared_informer.go:123] caches populated
I0110 13:42:21.458628  122899 shared_informer.go:123] caches populated
I0110 13:42:21.558892  122899 shared_informer.go:123] caches populated
I0110 13:42:21.659098  122899 shared_informer.go:123] caches populated
I0110 13:42:21.759335  122899 shared_informer.go:123] caches populated
I0110 13:42:21.859554  122899 shared_informer.go:123] caches populated
I0110 13:42:21.959782  122899 shared_informer.go:123] caches populated
I0110 13:42:21.959782  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
W0110 13:42:21.959943  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:21.959994  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:21.960026  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:21.960054  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:21.960081  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0110 13:42:21.960104  122899 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0110 13:42:21.960444  122899 reflector.go:131] Starting reflector *v1.StatefulSet (12h0m0s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960476  122899 reflector.go:169] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960493  122899 reflector.go:131] Starting reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960505  122899 reflector.go:131] Starting reflector *v1beta1.PodDisruptionBudget (30s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960518  122899 reflector.go:169] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960525  122899 reflector.go:169] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.960694  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:21.961150  122899 reflector.go:131] Starting reflector *v1.ReplicationController (12h0m0s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.961180  122899 reflector.go:169] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.961477  122899 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (469.686µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40452]
I0110 13:42:21.961512  122899 wrap.go:47] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (475.514µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40446]
I0110 13:42:21.961477  122899 wrap.go:47] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (465.205µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40448]
I0110 13:42:21.961838  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:21.962051  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:21.962184  122899 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=23151 labels= fields= timeout=5m23s
I0110 13:42:21.962014  122899 wrap.go:47] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (514.086µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 13:42:21.962196  122899 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=23151 labels= fields= timeout=7m39s
I0110 13:42:21.962219  122899 reflector.go:131] Starting reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.962234  122899 reflector.go:169] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.962335  122899 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=23151 labels= fields= timeout=8m19s
I0110 13:42:21.962660  122899 reflector.go:131] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.962678  122899 reflector.go:169] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132
I0110 13:42:21.962934  122899 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=23151 labels= fields= timeout=9m13s
I0110 13:42:21.963152  122899 wrap.go:47] GET /apis/apps/v1/deployments?limit=500&resourceVersion=0: (709.412µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 13:42:21.963162  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:21.963440  122899 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (411.131µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 13:42:21.964181  122899 get.go:251] Starting watch for /api/v1/pods, rv=23151 labels= fields= timeout=8m15s
I0110 13:42:21.970137  122899 get.go:251] Starting watch for /apis/apps/v1/deployments, rv=23151 labels= fields= timeout=9m18s
I0110 13:42:22.060316  122899 shared_informer.go:123] caches populated
I0110 13:42:22.160532  122899 shared_informer.go:123] caches populated
I0110 13:42:22.260764  122899 shared_informer.go:123] caches populated
I0110 13:42:22.360962  122899 shared_informer.go:123] caches populated
I0110 13:42:22.461181  122899 shared_informer.go:123] caches populated
I0110 13:42:22.561414  122899 shared_informer.go:123] caches populated
I0110 13:42:22.561661  122899 disruption.go:286] Starting disruption controller
I0110 13:42:22.561681  122899 controller_utils.go:1021] Waiting for caches to sync for disruption controller
I0110 13:42:22.564684  122899 wrap.go:47] POST /api/v1/nodes: (2.54476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.567200  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (2.01843ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.567538  122899 disruption.go:326] addPod called on pod "low-pod1"
I0110 13:42:22.567633  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.567659  122899 disruption.go:329] No matching pdb for pod "low-pod1"
I0110 13:42:22.567662  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod1
I0110 13:42:22.567672  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod1
I0110 13:42:22.567799  122899 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod1", node "node-1"
I0110 13:42:22.567814  122899 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod1", node "node-1": all PVCs bound and nothing to do
I0110 13:42:22.567873  122899 factory.go:1166] Attempting to bind low-pod1 to node-1
I0110 13:42:22.569907  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1/binding: (1.812665ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.570402  122899 scheduler.go:569] pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod1 is bound successfully on node node-1, 1 nodes evaluated, 1 nodes were found feasible
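Annotation: the `POST .../pods/low-pod1/binding` above is how the scheduler commits its decision: it writes a `Binding` object whose target names the chosen node, and the apiserver answers 201. A minimal sketch of that request against the binding subresource follows; the base URL, namespace, and missing auth are placeholders for illustration (the test uses a generated namespace like the one in the log).

```go
// Minimal sketch: bind a pod to a node by POSTing a v1 Binding to the
// pod's "binding" subresource, as the scheduler does above. The base
// URL, namespace, and absence of auth are assumptions for illustration.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func bindPod(baseURL, namespace, pod, node string) error {
	body := fmt.Sprintf(`{
  "apiVersion": "v1",
  "kind": "Binding",
  "metadata": {"name": %q},
  "target": {"apiVersion": "v1", "kind": "Node", "name": %q}
}`, pod, node)

	url := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s/binding", baseURL, namespace, pod)
	resp, err := http.Post(url, "application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated { // the log shows 201 on success
		return fmt.Errorf("bind failed: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := bindPod("http://127.0.0.1:8080", "default", "low-pod1", "node-1"); err != nil {
		fmt.Println(err)
	}
}
```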
I0110 13:42:22.570457  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:22.570479  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.570484  122899 disruption.go:341] No matching pdb for pod "low-pod1"
I0110 13:42:22.572319  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.573532ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.661903  122899 shared_informer.go:123] caches populated
I0110 13:42:22.661955  122899 controller_utils.go:1028] Caches are synced for disruption controller
I0110 13:42:22.661965  122899 disruption.go:294] Sending events to api server.
I0110 13:42:22.669682  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.748311ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.671825  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.502219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.674424  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1/status: (2.039212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.674778  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:22.674813  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.674820  122899 disruption.go:341] No matching pdb for pod "low-pod1"
I0110 13:42:22.676616  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.686694ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.676886  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod2
I0110 13:42:22.676907  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod2
I0110 13:42:22.676955  122899 disruption.go:326] addPod called on pod "low-pod2"
I0110 13:42:22.676989  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.677004  122899 disruption.go:329] No matching pdb for pod "low-pod2"
I0110 13:42:22.677081  122899 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod2", node "node-1"
I0110 13:42:22.677101  122899 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod2", node "node-1": all PVCs bound and nothing to do
I0110 13:42:22.677174  122899 factory.go:1166] Attempting to bind low-pod2 to node-1
I0110 13:42:22.679075  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2/binding: (1.629543ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.679245  122899 scheduler.go:569] pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/low-pod2 is bound successfully on node node-1, 1 nodes evaluated, 1 nodes were found feasible
I0110 13:42:22.679397  122899 disruption.go:338] updatePod called on pod "low-pod2"
I0110 13:42:22.679440  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.679457  122899 disruption.go:341] No matching pdb for pod "low-pod2"
I0110 13:42:22.680973  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.493397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.779178  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (1.829835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.781108  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (1.436496ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.783760  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2/status: (2.177307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.784167  122899 disruption.go:338] updatePod called on pod "low-pod2"
I0110 13:42:22.784206  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.784251  122899 disruption.go:341] No matching pdb for pod "low-pod2"
I0110 13:42:22.786166  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.831911ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.786526  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/mid-pod3
I0110 13:42:22.786538  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/mid-pod3
I0110 13:42:22.786553  122899 disruption.go:326] addPod called on pod "mid-pod3"
I0110 13:42:22.786567  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.786572  122899 disruption.go:329] No matching pdb for pod "mid-pod3"
I0110 13:42:22.786665  122899 scheduler_binder.go:211] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/mid-pod3", node "node-1"
I0110 13:42:22.786678  122899 scheduler_binder.go:221] AssumePodVolumes for pod "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/mid-pod3", node "node-1": all PVCs bound and nothing to do
I0110 13:42:22.786715  122899 factory.go:1166] Attempting to bind mid-pod3 to node-1
I0110 13:42:22.788252  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3/binding: (1.332251ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.788450  122899 scheduler.go:569] pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/mid-pod3 is bound successfully on node node-1, 1 nodes evaluated, 1 nodes were found feasible
I0110 13:42:22.788513  122899 disruption.go:338] updatePod called on pod "mid-pod3"
I0110 13:42:22.788537  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.788543  122899 disruption.go:341] No matching pdb for pod "mid-pod3"
I0110 13:42:22.790071  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.353631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.888815  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (1.797672ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.890768  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (1.4093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.893239  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3/status: (1.972223ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:22.893407  122899 disruption.go:338] updatePod called on pod "mid-pod3"
I0110 13:42:22.893440  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:22.893468  122899 disruption.go:341] No matching pdb for pod "mid-pod3"
I0110 13:42:22.960066  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:22.960825  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:22.962026  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:22.962207  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:22.963344  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:23.895650  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.712553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.897550  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (1.386199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.899222  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (1.277133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.902638  122899 wrap.go:47] POST /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (2.689635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.902910  122899 disruption.go:307] add DB "pdb-1"
I0110 13:42:23.904828  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:23.905170  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.907484ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.905379  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.441852ms)
I0110 13:42:23.906889  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.259345ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:23.907075  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (1.669637ms)
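Annotation: `pdb-1` above is the PodDisruptionBudget the test creates before submitting the preemptor, and the disruption controller syncs its status immediately. The log shows only the object's name and its `policy/v1beta1` group version, so the selector and `minAvailable` in the sketch below are illustrative assumptions.

```go
// Minimal sketch of building a policy/v1beta1 PodDisruptionBudget like
// "pdb-1". The label selector and minAvailable value are assumptions;
// only the name and API group version come from the log above.
package main

import (
	"encoding/json"
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	minAvailable := intstr.FromInt(2)
	pdb := policyv1beta1.PodDisruptionBudget{
		TypeMeta:   metav1.TypeMeta{APIVersion: "policy/v1beta1", Kind: "PodDisruptionBudget"},
		ObjectMeta: metav1.ObjectMeta{Name: "pdb-1"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			// Assumed selector: the test's real PDB presumably matches some of
			// the pods created above, but their labels are not in the log.
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "low"}},
		},
	}

	out, _ := json.MarshalIndent(pdb, "", "  ")
	fmt.Println(string(out))
}
```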
I0110 13:42:23.960275  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:23.961248  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:23.962145  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:23.962374  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:23.963518  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:24.904779  122899 wrap.go:47] GET /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (1.532962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:24.907259  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.85074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:24.907536  122899 disruption.go:326] addPod called on pod "preemptor-pod"
I0110 13:42:24.907561  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:24.907568  122899 disruption.go:329] No matching pdb for pod "preemptor-pod"
I0110 13:42:24.907588  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:24.907617  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:24.907751  122899 factory.go:1070] Unable to schedule preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu.; waiting
I0110 13:42:24.907799  122899 factory.go:1175] Updating pod condition for preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 13:42:24.909220  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.139315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:24.910206  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.876999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.911494  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (3.231138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40800]
I0110 13:42:24.911765  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:24.911789  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:24.911817  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:24.912962  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.028501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.913268  122899 generic_scheduler.go:1108] Node node-1 is a potential node for preemption.
I0110 13:42:24.915502  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (1.819532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.915599  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:24.915646  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:24.915662  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:24.918302  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (2.389481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.918403  122899 disruption.go:338] updatePod called on pod "mid-pod3"
I0110 13:42:24.918620  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:24.918644  122899 disruption.go:341] No matching pdb for pod "mid-pod3"
I0110 13:42:24.918741  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:24.918762  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:24.918893  122899 factory.go:1070] Unable to schedule preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu.; waiting
I0110 13:42:24.918948  122899 factory.go:1175] Updating pod condition for preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 13:42:24.920192  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.272289ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.920661  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.147365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:24.920949  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (1.760387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:24.922342  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (940.72µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:24.923110  122899 wrap.go:47] PATCH /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events/preemptor-pod.1578807ca9614532: (2.38301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40802]
I0110 13:42:24.960482  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:24.961453  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:24.962246  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:24.962525  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:24.963679  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:25.910701  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (2.746426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:25.960735  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:25.961673  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:25.962803  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:25.962840  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:25.963871  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:26.014020  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.803078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:26.017828  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:26.018412  122899 disruption.go:344] updatePod "low-pod1" -> PDB "pdb-1"
I0110 13:42:26.019518  122899 disruption.go:367] deletePod called on pod "low-pod1"
I0110 13:42:26.019550  122899 disruption.go:373] deletePod "low-pod1" -> PDB "pdb-1"
I0110 13:42:26.019555  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (5.072682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:26.020357  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:26.020429  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.702116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.020773  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.31969ms)
I0110 13:42:26.021994  122899 disruption.go:338] updatePod called on pod "low-pod2"
I0110 13:42:26.022023  122899 disruption.go:344] updatePod "low-pod2" -> PDB "pdb-1"
I0110 13:42:26.023375  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:26.023378  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (2.379582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.023563  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.766931ms)
I0110 13:42:26.025125  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (5.328383ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:26.025369  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.551663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.025590  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:26.025770  122899 disruption.go:367] deletePod called on pod "low-pod2"
I0110 13:42:26.025792  122899 disruption.go:373] deletePod "low-pod2" -> PDB "pdb-1"
I0110 13:42:26.025934  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.350019ms)
I0110 13:42:26.028167  122899 disruption.go:338] updatePod called on pod "mid-pod3"
I0110 13:42:26.028188  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.028193  122899 disruption.go:341] No matching pdb for pod "mid-pod3"
I0110 13:42:26.028421  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.891001ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.028499  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.83584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.028564  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:26.028570  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.611455ms)
I0110 13:42:26.032171  122899 wrap.go:47] PATCH /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events/pdb-1.1578807cec07a2c9: (2.047507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.032255  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.99347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.032528  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (3.902387ms)
I0110 13:42:26.034948  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (9.269812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40474]
I0110 13:42:26.035675  122899 disruption.go:367] deletePod called on pod "mid-pod3"
I0110 13:42:26.035688  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod3, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.035693  122899 disruption.go:370] No matching pdb for pod "mid-pod3"
I0110 13:42:26.039236  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:26.039281  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.039288  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:26.040341  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:26.040919  122899 scheduler.go:450] Skip schedule deleting pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:26.040417  122899 disruption.go:367] deletePod called on pod "preemptor-pod"
I0110 13:42:26.045497  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.045504  122899 disruption.go:370] No matching pdb for pod "preemptor-pod"
I0110 13:42:26.041138  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (5.350837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.048651  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (2.887134ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:26.048993  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.145678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.051536  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (1.092622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.053956  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod3: (922.04µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.057489  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.56335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.061323  122899 wrap.go:47] DELETE /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (3.464244ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.061348  122899 disruption.go:320] remove DB "pdb-1"
I0110 13:42:26.061377  122899 disruption.go:481] PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" has been deleted
I0110 13:42:26.061399  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (30.305µs)
I0110 13:42:26.065709  122899 wrap.go:47] DELETE /api/v1/nodes: (4.029171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.068013  122899 wrap.go:47] POST /api/v1/nodes: (1.664161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.069777  122899 wrap.go:47] POST /api/v1/nodes: (1.385361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.071965  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.753043ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.072019  122899 disruption.go:326] addPod called on pod "low-pod1"
I0110 13:42:26.072063  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.072081  122899 disruption.go:329] No matching pdb for pod "low-pod1"
I0110 13:42:26.174481  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.741079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.176300  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.31913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.179471  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1/status: (2.688975ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.179655  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:26.179683  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.179689  122899 disruption.go:341] No matching pdb for pod "low-pod1"
I0110 13:42:26.181825  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.891451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.182410  122899 disruption.go:326] addPod called on pod "mid-pod2"
I0110 13:42:26.182428  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.182433  122899 disruption.go:329] No matching pdb for pod "mid-pod2"
I0110 13:42:26.285335  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (2.789133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.287095  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (1.270643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:26.289519  122899 disruption.go:338] updatePod called on pod "mid-pod2"
I0110 13:42:26.289550  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:26.289557  122899 disruption.go:341] No matching pdb for pod "mid-pod2"
I0110 13:42:26.290121  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2/status: (2.58306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
E0110 13:42:26.395436  122899 event.go:212] Unable to write event: 'Patch http://127.0.0.1:36035/api/v1/namespaces/prebind-plugin647ba15f-14dd-11e9-8838-0242ac110002/events/test-pod.1578806bf376a95a: dial tcp 127.0.0.1:36035: connect: connection refused' (may retry after sleeping)
I0110 13:42:26.960959  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:26.961821  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:26.962981  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:26.963009  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:26.964998  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:27.292411  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.669841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.294292  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (1.302807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.296265  122899 wrap.go:47] POST /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (1.50418ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.296537  122899 disruption.go:307] add DB "pdb-1"
I0110 13:42:27.298646  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.794269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.298834  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.264479ms)
I0110 13:42:27.298926  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:27.300587  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.418579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.300840  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (1.881222ms)
I0110 13:42:27.586565  122899 wrap.go:47] GET /api/v1/namespaces/default: (1.514947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.588259  122899 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.28301ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.589812  122899 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.208613ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:27.961172  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:27.961969  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:27.963158  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:27.963189  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:27.965166  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:28.298334  122899 wrap.go:47] GET /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (1.353502ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:28.300775  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.949568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:28.301049  122899 disruption.go:326] addPod called on pod "preemptor-pod"
I0110 13:42:28.301065  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:28.301070  122899 disruption.go:329] No matching pdb for pod "preemptor-pod"
I0110 13:42:28.301076  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:28.301085  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:28.301229  122899 factory.go:1070] Unable to schedule preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod: no fit: 0/2 nodes are available: 2 Insufficient cpu.; waiting
I0110 13:42:28.301302  122899 factory.go:1175] Updating pod condition for preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 13:42:28.303420  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.468975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0110 13:42:28.304135  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:28.304155  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:28.304161  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:28.304193  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (2.624722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.304407  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (2.835154ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40970]
I0110 13:42:28.305804  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.049671ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.306069  122899 generic_scheduler.go:1108] Node node-1 is a potential node for preemption.
I0110 13:42:28.306089  122899 generic_scheduler.go:1108] Node node-2 is a potential node for preemption.
I0110 13:42:28.308367  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (1.960408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.308906  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:28.308934  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:28.308941  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:28.311179  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (2.397595ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.311839  122899 disruption.go:338] updatePod called on pod "mid-pod2"
I0110 13:42:28.311901  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:28.311923  122899 disruption.go:341] No matching pdb for pod "mid-pod2"
I0110 13:42:28.312250  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:28.312283  122899 scheduler.go:454] Attempting to schedule pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:28.312385  122899 factory.go:1070] Unable to schedule preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod: no fit: 0/2 nodes are available: 2 Insufficient cpu.; waiting
I0110 13:42:28.312415  122899 factory.go:1175] Updating pod condition for preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I0110 13:42:28.313070  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (1.325287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.314936  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.603732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 13:42:28.315351  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod/status: (2.687004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0110 13:42:28.316325  122899 wrap.go:47] PATCH /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events/preemptor-pod.1578807d73a5f317: (2.657885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:28.317140  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.060452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41000]
I0110 13:42:28.961377  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:28.962116  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:28.963341  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:28.963361  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:28.965316  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:29.303569  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (2.08968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:29.407269  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (1.676203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:29.411760  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:29.411817  122899 disruption.go:344] updatePod "low-pod1" -> PDB "pdb-1"
I0110 13:42:29.414689  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (2.583148ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 13:42:29.414979  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:29.414990  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (3.151518ms)
I0110 13:42:29.416423  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (8.566272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:29.416763  122899 disruption.go:367] deletePod called on pod "low-pod1"
I0110 13:42:29.416791  122899 disruption.go:373] deletePod "low-pod1" -> PDB "pdb-1"
I0110 13:42:29.418208  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (2.771103ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 13:42:29.418811  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (3.794876ms)
I0110 13:42:29.423138  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (3.706239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 13:42:29.423724  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (4.242934ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.423848  122899 disruption.go:314] update DB "pdb-1"
I0110 13:42:29.423922  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (5.089146ms)
I0110 13:42:29.424275  122899 disruption.go:338] updatePod called on pod "mid-pod2"
I0110 13:42:29.424286  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.424299  122899 disruption.go:341] No matching pdb for pod "mid-pod2"
I0110 13:42:29.425758  122899 wrap.go:47] PUT /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets/pdb-1/status: (1.5776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41002]
I0110 13:42:29.426524  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (2.58013ms)
I0110 13:42:29.429039  122899 wrap.go:47] PATCH /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events/pdb-1.1578807db6432936: (4.120696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.430007  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (13.259183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:29.430654  122899 disruption.go:367] deletePod called on pod "mid-pod2"
I0110 13:42:29.430688  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.430703  122899 disruption.go:370] No matching pdb for pod "mid-pod2"
I0110 13:42:29.434809  122899 scheduling_queue.go:821] About to try and schedule pod preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:29.434883  122899 scheduler.go:450] Skip schedule deleting pod: preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/preemptor-pod
I0110 13:42:29.435314  122899 disruption.go:338] updatePod called on pod "preemptor-pod"
I0110 13:42:29.435345  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.435352  122899 disruption.go:341] No matching pdb for pod "preemptor-pod"
I0110 13:42:29.438239  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/events: (2.836192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.441276  122899 disruption.go:367] deletePod called on pod "preemptor-pod"
I0110 13:42:29.441322  122899 disruption.go:401] No PodDisruptionBudgets found for pod preemptor-pod, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.441330  122899 disruption.go:370] No matching pdb for pod "preemptor-pod"
I0110 13:42:29.442281  122899 wrap.go:47] DELETE /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (11.366953ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40804]
I0110 13:42:29.446980  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.08383ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.449521  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (941.201µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.451922  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/preemptor-pod: (848.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.456477  122899 wrap.go:47] DELETE /apis/policy/v1beta1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/poddisruptionbudgets: (4.079765ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.457143  122899 disruption.go:320] remove DB "pdb-1"
I0110 13:42:29.457183  122899 disruption.go:481] PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" has been deleted
I0110 13:42:29.457205  122899 disruption.go:472] Finished syncing PodDisruptionBudget "preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pdb-1" (29.401µs)
I0110 13:42:29.462929  122899 wrap.go:47] DELETE /api/v1/nodes: (5.839586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.465319  122899 wrap.go:47] POST /api/v1/nodes: (1.630435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.466998  122899 wrap.go:47] POST /api/v1/nodes: (1.352119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.470410  122899 wrap.go:47] POST /api/v1/nodes: (2.937829ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.472762  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.956061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.472807  122899 disruption.go:326] addPod called on pod "low-pod1"
I0110 13:42:29.472879  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.472891  122899 disruption.go:329] No matching pdb for pod "low-pod1"
I0110 13:42:29.575070  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.667763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.577274  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1: (1.762777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.580151  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod1/status: (2.366658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.580814  122899 disruption.go:338] updatePod called on pod "low-pod1"
I0110 13:42:29.580883  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.580892  122899 disruption.go:341] No matching pdb for pod "low-pod1"
I0110 13:42:29.582665  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.735491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.582826  122899 disruption.go:326] addPod called on pod "mid-pod1"
I0110 13:42:29.582899  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.582910  122899 disruption.go:329] No matching pdb for pod "mid-pod1"
I0110 13:42:29.685038  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod1: (1.718443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.687783  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod1: (2.186826ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.690402  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod1/status: (2.091096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.690687  122899 disruption.go:338] updatePod called on pod "mid-pod1"
I0110 13:42:29.690721  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod1, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.690729  122899 disruption.go:341] No matching pdb for pod "mid-pod1"
I0110 13:42:29.692834  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.928574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.692981  122899 disruption.go:326] addPod called on pod "low-pod2"
I0110 13:42:29.693009  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.693014  122899 disruption.go:329] No matching pdb for pod "low-pod2"
I0110 13:42:29.799469  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (4.156777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.801985  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2: (1.99067ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.805888  122899 disruption.go:338] updatePod called on pod "low-pod2"
I0110 13:42:29.805919  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.805925  122899 disruption.go:341] No matching pdb for pod "low-pod2"
I0110 13:42:29.806116  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod2/status: (3.650227ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.809200  122899 disruption.go:326] addPod called on pod "mid-pod2"
I0110 13:42:29.809229  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.809235  122899 disruption.go:329] No matching pdb for pod "mid-pod2"
I0110 13:42:29.809364  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (2.672679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.912172  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (1.832371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.914105  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2: (1.396219ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.916552  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/mid-pod2/status: (2.016897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.917262  122899 disruption.go:338] updatePod called on pod "mid-pod2"
I0110 13:42:29.917293  122899 disruption.go:401] No PodDisruptionBudgets found for pod mid-pod2, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.917299  122899 disruption.go:341] No matching pdb for pod "mid-pod2"
I0110 13:42:29.918668  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (1.628812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:29.919177  122899 disruption.go:326] addPod called on pod "low-pod4"
I0110 13:42:29.919211  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod4, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:29.919227  122899 disruption.go:329] No matching pdb for pod "low-pod4"
I0110 13:42:29.961802  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:29.962216  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:29.963523  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:29.963521  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:29.965502  122899 reflector.go:215] k8s.io/client-go/informers/factory.go:132: forcing resync
I0110 13:42:30.022971  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod4: (3.430762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.027036  122899 wrap.go:47] GET /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod4: (3.464058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.034428  122899 wrap.go:47] PUT /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods/low-pod4/status: (6.911305ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.035073  122899 disruption.go:338] updatePod called on pod "low-pod4"
I0110 13:42:30.035102  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod4, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:30.035107  122899 disruption.go:341] No matching pdb for pod "low-pod4"
I0110 13:42:30.038233  122899 disruption.go:326] addPod called on pod "low-pod5"
I0110 13:42:30.038271  122899 disruption.go:401] No PodDisruptionBudgets found for pod low-pod5, PodDisruptionBudget controller will avoid syncing.
I0110 13:42:30.038278  122899 disruption.go:329] No matching pdb for pod "low-pod5"
I0110 13:42:30.039400  122899 wrap.go:47] POST /api/v1/namespaces/preemption-pdb8da9ec5f-14dd-11e9-8838-0242ac110002/pods: (3.883381ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.040035  122899 disruption.go:303] Shutting down disruption controller
E0110 13:42:30.040573  122899 scheduling_queue.go:824] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0110 13:42:30.041160  122899 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?resourceVersion=23151&timeout=8m32s&timeoutSeconds=512&watch=true: (9.080480811s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40428]
I0110 13:42:30.041188  122899 wrap.go:47] GET /api/v1/persistentvolumes?resourceVersion=23151&timeout=7m20s&timeoutSeconds=440&watch=true: (9.079445796s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40436]
I0110 13:42:30.041267  122899 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=23151&timeout=7m33s&timeoutSeconds=453&watch=true: (9.080302553s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40424]
I0110 13:42:30.041341  122899 wrap.go:47] GET /api/v1/services?resourceVersion=23156&timeout=9m10s&timeoutSeconds=550&watch=true: (9.079417524s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40420]
I0110 13:42:30.041385  122899 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=23151&timeout=9m13s&timeoutSeconds=553&watch=true: (9.079907267s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40422]
I0110 13:42:30.041434  122899 wrap.go:47] GET /api/v1/persistentvolumeclaims?resourceVersion=23151&timeout=6m25s&timeoutSeconds=385&watch=true: (9.078929885s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40418]
I0110 13:42:30.041467  122899 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=23151&timeout=8m18s&timeoutSeconds=498&watch=true: (9.080040807s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40416]
I0110 13:42:30.041536  122899 wrap.go:47] GET /apis/apps/v1/replicasets?resourceVersion=23151&timeout=5m23s&timeoutSeconds=323&watch=true: (8.079601384s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40452]
I0110 13:42:30.041543  122899 wrap.go:47] GET /apis/apps/v1/statefulsets?resourceVersion=23151&timeout=8m19s&timeoutSeconds=499&watch=true: (8.079471924s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40456]
I0110 13:42:30.041701  122899 wrap.go:47] GET /apis/apps/v1/deployments?resourceVersion=23151&timeout=9m18s&timeoutSeconds=558&watch=true: (8.07185495s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40454]
I0110 13:42:30.041714  122899 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=23151&timeout=7m39s&timeoutSeconds=459&watch=true: (8.079783657s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40446]
I0110 13:42:30.041758  122899 wrap.go:47] GET /apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=23151&timeout=5m32s&timeoutSeconds=332&watch=true: (9.080005897s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40434]
I0110 13:42:30.041803  122899 wrap.go:47] GET /api/v1/nodes?resourceVersion=23151&timeout=6m12s&timeoutSeconds=372&watch=true: (9.08204242s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40426]
I0110 13:42:30.041811  122899 wrap.go:47] GET /api/v1/replicationcontrollers?resourceVersion=23151&timeout=9m13s&timeoutSeconds=553&watch=true: (8.079211444s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40458]
I0110 13:42:30.041911  122899 wrap.go:47] GET /api/v1/pods?resourceVersion=23151&timeout=8m15s&timeoutSeconds=495&watch=true: (8.077991723s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40460]
I0110 13:42:30.041917  122899 wrap.go:47] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=23151&timeoutSeconds=409&watch=true: (9.183062623s) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:40330]
I0110 13:42:30.064439  122899 wrap.go:47] DELETE /api/v1/nodes: (15.73027ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.064677  122899 controller.go:170] Shutting down kubernetes service endpoint reconciler
I0110 13:42:30.066275  122899 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.359806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.070341  122899 wrap.go:47] PUT /api/v1/namespaces/default/endpoints/kubernetes: (3.638414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41030]
I0110 13:42:30.070774  122899 feature_gate.go:226] feature gates: &{map[PodPriority:true]}
preemption_test.go:927: ================ Running test: A non-PDB violating pod is preempted despite its higher priority
preemption_test.go:927: ================ Running test: A node without any PDB violating pods is preferred for preemption
preemption_test.go:927: ================ Running test: A node with fewer PDB violating pods is preferred for preemption
preemption_test.go:940: Test [A node with fewer PDB violating pods is preferred for preemption]: Error running pause pod: Error creating pause pod: 0-length response with status code: 200 and content type: 
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190110-133659.xml
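For context only, here is a minimal Go sketch (not taken from preemption_test.go; names, namespace, labels and the minAvailable value are illustrative assumptions) of how a PodDisruptionBudget like the "pdb-1" object exercised in the log above is declared with the policy/v1beta1 types that the logged POST/PUT requests target:

    // Hypothetical sketch, not the test's actual code.
    package main

    import (
    	"fmt"

    	policyv1beta1 "k8s.io/api/policy/v1beta1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	// minAvailable=2 is an assumed value: preemption may evict matching
    	// victims only while at least two selected pods stay available.
    	minAvailable := intstr.FromInt(2)

    	pdb := &policyv1beta1.PodDisruptionBudget{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "pdb-1",
    			Namespace: "preemption-pdb-example", // placeholder namespace
    		},
    		Spec: policyv1beta1.PodDisruptionBudgetSpec{
    			MinAvailable: &minAvailable,
    			Selector: &metav1.LabelSelector{
    				MatchLabels: map[string]string{"pod": "low"}, // assumed label
    			},
    		},
    	}

    	fmt.Printf("would POST /apis/policy/v1beta1/namespaces/%s/poddisruptionbudgets: %s\n",
    		pdb.Namespace, pdb.Name)
    }

As the test-case names above indicate, the scheduler's preemption logic prefers victims (and nodes) that do not violate such a budget, which is what the three sub-tests listed here exercise.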

Error lines from build-log.txt

... skipping 10 lines ...
I0110 13:23:49.062] process 213 exited with code 0 after 0.0m
I0110 13:23:49.063] Call:  gcloud config get-value account
I0110 13:23:49.376] process 225 exited with code 0 after 0.0m
I0110 13:23:49.376] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0110 13:23:49.376] Call:  kubectl get -oyaml pods/e2852037-14da-11e9-a09b-0a580a6c03f2
W0110 13:23:49.490] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0110 13:23:49.493] Command failed
I0110 13:23:49.493] process 237 exited with code 1 after 0.0m
E0110 13:23:49.493] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/e2852037-14da-11e9-a09b-0a580a6c03f2']' returned non-zero exit status 1
I0110 13:23:49.493] Root: /workspace
I0110 13:23:49.494] cd to /workspace
I0110 13:23:49.494] Checkout: /workspace/k8s.io/kubernetes master to /workspace/k8s.io/kubernetes
I0110 13:23:49.494] Call:  git init k8s.io/kubernetes
... skipping 795 lines ...
W0110 13:32:09.484] I0110 13:32:09.483328   56685 daemon_controller.go:267] Starting daemon sets controller
W0110 13:32:09.484] I0110 13:32:09.483359   56685 controller_utils.go:1021] Waiting for caches to sync for daemon sets controller
W0110 13:32:09.484] I0110 13:32:09.483686   56685 controllermanager.go:516] Started "ttl"
W0110 13:32:09.485] I0110 13:32:09.483902   56685 ttl_controller.go:116] Starting TTL controller
W0110 13:32:09.485] I0110 13:32:09.483927   56685 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0110 13:32:09.485] I0110 13:32:09.484014   56685 node_lifecycle_controller.go:77] Sending events to api server
W0110 13:32:09.486] E0110 13:32:09.484104   56685 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0110 13:32:09.486] W0110 13:32:09.484122   56685 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0110 13:32:09.486] I0110 13:32:09.484551   56685 controllermanager.go:516] Started "pvc-protection"
W0110 13:32:09.486] I0110 13:32:09.484633   56685 pvc_protection_controller.go:99] Starting PVC protection controller
W0110 13:32:09.487] I0110 13:32:09.484644   56685 controller_utils.go:1021] Waiting for caches to sync for PVC protection controller
W0110 13:32:09.487] I0110 13:32:09.484901   56685 controllermanager.go:516] Started "pv-protection"
W0110 13:32:09.487] I0110 13:32:09.485000   56685 pv_protection_controller.go:81] Starting PV protection controller
... skipping 41 lines ...
W0110 13:32:09.607] I0110 13:32:09.605046   56685 node_lifecycle_controller.go:294] Controller is using taint based evictions.
W0110 13:32:09.607] I0110 13:32:09.605095   56685 taint_manager.go:175] Sending events to api server.
W0110 13:32:09.607] I0110 13:32:09.605400   56685 node_lifecycle_controller.go:360] Controller will taint node by condition.
W0110 13:32:09.607] I0110 13:32:09.605439   56685 controllermanager.go:516] Started "nodelifecycle"
W0110 13:32:09.607] I0110 13:32:09.605561   56685 node_lifecycle_controller.go:405] Starting node controller
W0110 13:32:09.608] I0110 13:32:09.605578   56685 controller_utils.go:1021] Waiting for caches to sync for taint controller
W0110 13:32:09.608] E0110 13:32:09.605836   56685 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0110 13:32:09.608] W0110 13:32:09.605870   56685 controllermanager.go:508] Skipping "service"
W0110 13:32:09.608] I0110 13:32:09.607403   56685 controllermanager.go:516] Started "persistentvolume-binder"
W0110 13:32:09.608] I0110 13:32:09.608541   56685 controllermanager.go:516] Started "clusterrole-aggregation"
W0110 13:32:09.609] I0110 13:32:09.609639   56685 pv_controller_base.go:271] Starting persistent volume controller
W0110 13:32:09.610] I0110 13:32:09.609677   56685 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0110 13:32:09.610] I0110 13:32:09.609716   56685 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
... skipping 20 lines ...
W0110 13:32:09.666] I0110 13:32:09.663907   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0110 13:32:09.666] I0110 13:32:09.663956   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0110 13:32:09.666] I0110 13:32:09.664002   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
W0110 13:32:09.666] I0110 13:32:09.664032   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0110 13:32:09.667] I0110 13:32:09.664066   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0110 13:32:09.667] I0110 13:32:09.664098   56685 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W0110 13:32:09.667] E0110 13:32:09.664135   56685 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0110 13:32:09.667] I0110 13:32:09.664164   56685 controllermanager.go:516] Started "resourcequota"
W0110 13:32:09.667] I0110 13:32:09.664216   56685 resource_quota_controller.go:276] Starting resource quota controller
W0110 13:32:09.667] I0110 13:32:09.664232   56685 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0110 13:32:09.667] I0110 13:32:09.664251   56685 resource_quota_monitor.go:301] QuotaMonitor running
W0110 13:32:09.668] I0110 13:32:09.664824   56685 controllermanager.go:516] Started "job"
W0110 13:32:09.668] I0110 13:32:09.665456   56685 controllermanager.go:516] Started "deployment"
... skipping 21 lines ...
W0110 13:32:09.804] I0110 13:32:09.803556   56685 controller_utils.go:1028] Caches are synced for GC controller
W0110 13:32:09.804] I0110 13:32:09.804339   56685 controller_utils.go:1028] Caches are synced for endpoint controller
W0110 13:32:09.806] I0110 13:32:09.805798   56685 controller_utils.go:1028] Caches are synced for taint controller
W0110 13:32:09.806] I0110 13:32:09.805921   56685 taint_manager.go:198] Starting NoExecuteTaintManager
W0110 13:32:09.810] I0110 13:32:09.810100   56685 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0110 13:32:09.810] I0110 13:32:09.810515   56685 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0110 13:32:09.821] E0110 13:32:09.821069   56685 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0110 13:32:09.822] E0110 13:32:09.821649   56685 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0110 13:32:09.868] I0110 13:32:09.868218   56685 controller_utils.go:1028] Caches are synced for deployment controller
W0110 13:32:09.869] I0110 13:32:09.868255   56685 controller_utils.go:1028] Caches are synced for disruption controller
W0110 13:32:09.869] I0110 13:32:09.868265   56685 disruption.go:294] Sending events to api server.
W0110 13:32:09.869] I0110 13:32:09.868218   56685 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0110 13:32:09.869] I0110 13:32:09.868333   56685 controller_utils.go:1028] Caches are synced for job controller
I0110 13:32:10.000] +++ [0110 13:32:09] On try 3, controller-manager: ok
W0110 13:32:10.100] I0110 13:32:10.083677   56685 controller_utils.go:1028] Caches are synced for daemon sets controller
W0110 13:32:10.193] W0110 13:32:10.193172   56685 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0110 13:32:10.285] I0110 13:32:10.284878   56685 controller_utils.go:1028] Caches are synced for PVC protection controller
W0110 13:32:10.365] I0110 13:32:10.364536   56685 controller_utils.go:1028] Caches are synced for resource quota controller
W0110 13:32:10.368] I0110 13:32:10.368380   56685 controller_utils.go:1028] Caches are synced for stateful set controller
W0110 13:32:10.385] I0110 13:32:10.385249   56685 controller_utils.go:1028] Caches are synced for PV protection controller
W0110 13:32:10.401] I0110 13:32:10.401316   56685 controller_utils.go:1028] Caches are synced for garbage collector controller
W0110 13:32:10.402] I0110 13:32:10.401361   56685 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
... skipping 27 lines ...
I0110 13:32:10.874] }+++ [0110 13:32:10] Testing kubectl version: check client only output matches expected output
I0110 13:32:11.007] Successful: the flag '--client' shows correct client info
I0110 13:32:11.014] (BSuccessful: the flag '--client' correctly has no server version info
I0110 13:32:11.017] (B+++ [0110 13:32:11] Testing kubectl version: verify json output
W0110 13:32:11.117] I0110 13:32:11.097263   56685 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0110 13:32:11.198] I0110 13:32:11.197731   56685 controller_utils.go:1028] Caches are synced for garbage collector controller
W0110 13:32:11.214] E0110 13:32:11.213453   56685 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0110 13:32:11.314] Successful: --output json has correct client info
I0110 13:32:11.315] Successful: --output json has correct server info
I0110 13:32:11.315] +++ [0110 13:32:11] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0110 13:32:11.315] Successful: --client --output json has correct client info
I0110 13:32:11.315] Successful: --client --output json has no server info
I0110 13:32:11.315] +++ [0110 13:32:11] Testing kubectl version: compare json output using additional --short flag
... skipping 48 lines ...
I0110 13:32:14.128] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:32:14.130] +++ command: run_RESTMapper_evaluation_tests
I0110 13:32:14.142] +++ [0110 13:32:14] Creating namespace namespace-1547127134-13981
I0110 13:32:14.211] namespace/namespace-1547127134-13981 created
I0110 13:32:14.277] Context "test" modified.
I0110 13:32:14.283] +++ [0110 13:32:14] Testing RESTMapper
I0110 13:32:14.398] +++ [0110 13:32:14] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0110 13:32:14.413] +++ exit code: 0
I0110 13:32:14.523] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0110 13:32:14.524] bindings                                                                      true         Binding
I0110 13:32:14.524] componentstatuses                 cs                                          false        ComponentStatus
I0110 13:32:14.524] configmaps                        cm                                          true         ConfigMap
I0110 13:32:14.524] endpoints                         ep                                          true         Endpoints
... skipping 609 lines ...
I0110 13:32:32.828] poddisruptionbudget.policy/test-pdb-3 created
I0110 13:32:32.911] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0110 13:32:32.977] poddisruptionbudget.policy/test-pdb-4 created
I0110 13:32:33.065] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0110 13:32:33.211] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:32:33.368] pod/env-test-pod created
W0110 13:32:33.469] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0110 13:32:33.469] error: setting 'all' parameter but found a non empty selector. 
W0110 13:32:33.469] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 13:32:33.470] I0110 13:32:32.528991   53345 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0110 13:32:33.470] error: min-available and max-unavailable cannot be both specified
I0110 13:32:33.570] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0110 13:32:33.570] Name:               env-test-pod
I0110 13:32:33.570] Namespace:          test-kubectl-describe-pod
I0110 13:32:33.571] Priority:           0
I0110 13:32:33.571] PriorityClassName:  <none>
I0110 13:32:33.571] Node:               <none>
... skipping 145 lines ...
W0110 13:32:45.382] I0110 13:32:44.235886   56685 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W0110 13:32:45.382] I0110 13:32:44.906277   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127160-20787", Name:"modified", UID:"365e1e5b-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-28l8d
I0110 13:32:45.535] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:32:45.679] pod/valid-pod created
I0110 13:32:45.775] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 13:32:45.929] Successful
I0110 13:32:45.929] message:Error from server: cannot restore map from string
I0110 13:32:45.929] has:cannot restore map from string
I0110 13:32:46.021] Successful
I0110 13:32:46.021] message:pod/valid-pod patched (no change)
I0110 13:32:46.021] has:patched (no change)
I0110 13:32:46.102] pod/valid-pod patched
I0110 13:32:46.194] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0110 13:32:46.689] pod/valid-pod patched
I0110 13:32:46.781] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0110 13:32:46.851] pod/valid-pod patched
I0110 13:32:46.939] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0110 13:32:47.086] pod/valid-pod patched
I0110 13:32:47.175] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 13:32:47.334] +++ [0110 13:32:47] "kubectl patch with resourceVersion 490" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0110 13:32:47.435] E0110 13:32:45.922307   53345 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0110 13:32:47.561] pod "valid-pod" deleted
I0110 13:32:47.575] pod/valid-pod replaced
I0110 13:32:47.664] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0110 13:32:47.810] Successful
I0110 13:32:47.810] message:error: --grace-period must have --force specified
I0110 13:32:47.811] has:\-\-grace-period must have \-\-force specified
I0110 13:32:47.954] Successful
I0110 13:32:47.955] message:error: --timeout must have --force specified
I0110 13:32:47.955] has:\-\-timeout must have \-\-force specified
W0110 13:32:48.100] W0110 13:32:48.099422   56685 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0110 13:32:48.200] node/node-v1-test created
I0110 13:32:48.241] node/node-v1-test replaced
I0110 13:32:48.329] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0110 13:32:48.402] node "node-v1-test" deleted
I0110 13:32:48.495] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0110 13:32:48.746] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 58 lines ...
I0110 13:32:53.451] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:32:53.592] pod/test-pod created
W0110 13:32:53.693] Edit cancelled, no changes made.
W0110 13:32:53.693] Edit cancelled, no changes made.
W0110 13:32:53.693] Edit cancelled, no changes made.
W0110 13:32:53.693] Edit cancelled, no changes made.
W0110 13:32:53.693] error: 'name' already has a value (valid-pod), and --overwrite is false
W0110 13:32:53.693] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 13:32:53.694] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0110 13:32:53.794] pod "test-pod" deleted
I0110 13:32:53.794] +++ [0110 13:32:53] Creating namespace namespace-1547127173-20924
I0110 13:32:53.822] namespace/namespace-1547127173-20924 created
I0110 13:32:53.885] Context "test" modified.
... skipping 41 lines ...
I0110 13:32:56.856] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0110 13:32:56.858] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:32:56.860] +++ command: run_kubectl_create_error_tests
I0110 13:32:56.870] +++ [0110 13:32:56] Creating namespace namespace-1547127176-6344
I0110 13:32:56.936] namespace/namespace-1547127176-6344 created
I0110 13:32:57.002] Context "test" modified.
I0110 13:32:57.009] +++ [0110 13:32:57] Testing kubectl create with error
W0110 13:32:57.109] Error: required flag(s) "filename" not set
W0110 13:32:57.109] 
W0110 13:32:57.109] 
W0110 13:32:57.110] Examples:
W0110 13:32:57.110]   # Create a pod using the data in pod.json.
W0110 13:32:57.110]   kubectl create -f ./pod.json
W0110 13:32:57.110]   
... skipping 38 lines ...
W0110 13:32:57.115]   kubectl create -f FILENAME [options]
W0110 13:32:57.115] 
W0110 13:32:57.115] Use "kubectl <command> --help" for more information about a given command.
W0110 13:32:57.115] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0110 13:32:57.115] 
W0110 13:32:57.115] required flag(s) "filename" not set
I0110 13:32:57.221] +++ [0110 13:32:57] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0110 13:32:57.322] kubectl convert is DEPRECATED and will be removed in a future version.
W0110 13:32:57.322] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0110 13:32:57.423] +++ exit code: 0
I0110 13:32:57.423] Recording: run_kubectl_apply_tests
I0110 13:32:57.423] Running command: run_kubectl_apply_tests
I0110 13:32:57.430] 
... skipping 13 lines ...
I0110 13:32:58.386] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0110 13:32:59.257] deployment.extensions "test-deployment-retainkeys" deleted
I0110 13:32:59.349] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:32:59.498] pod/selector-test-pod created
I0110 13:32:59.589] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0110 13:32:59.675] Successful
I0110 13:32:59.675] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0110 13:32:59.675] has:pods "selector-test-pod-dont-apply" not found
I0110 13:32:59.747] pod "selector-test-pod" deleted
I0110 13:32:59.835] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:00.048] pod/test-pod created (server dry run)
I0110 13:33:00.138] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:00.281] pod/test-pod created
... skipping 6 lines ...
W0110 13:33:00.383] I0110 13:32:58.878999   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127177-23554", Name:"test-deployment-retainkeys", UID:"3e57fa8a-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-7495cff5f to 1
W0110 13:33:00.384] I0110 13:32:58.882220   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127177-23554", Name:"test-deployment-retainkeys-7495cff5f", UID:"3eb2b3f2-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7495cff5f-gxh58
I0110 13:33:00.497] pod/test-pod configured (server dry run)
I0110 13:33:00.589] apply.sh:91: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0110 13:33:00.667] pod "test-pod" deleted
I0110 13:33:00.891] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0110 13:33:00.992] E0110 13:33:00.895683   53345 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W0110 13:33:01.143] I0110 13:33:01.142470   53345 clientconn.go:551] parsed scheme: ""
W0110 13:33:01.143] I0110 13:33:01.142509   53345 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0110 13:33:01.143] I0110 13:33:01.142566   53345 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0110 13:33:01.143] I0110 13:33:01.142674   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:33:01.144] I0110 13:33:01.143138   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:33:01.148] I0110 13:33:01.148050   53345 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0110 13:33:01.237] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0110 13:33:01.338] kind.mygroup.example.com/myobj created (server dry run)
I0110 13:33:01.338] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0110 13:33:01.415] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:01.564] pod/a created
I0110 13:33:02.862] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0110 13:33:02.942] Successful
I0110 13:33:02.942] message:Error from server (NotFound): pods "b" not found
I0110 13:33:02.942] has:pods "b" not found
I0110 13:33:03.103] pod/b created
I0110 13:33:03.115] pod/a pruned
I0110 13:33:04.600] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0110 13:33:04.682] Successful
I0110 13:33:04.682] message:Error from server (NotFound): pods "a" not found
I0110 13:33:04.682] has:pods "a" not found
I0110 13:33:04.759] pod "b" deleted
I0110 13:33:04.852] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:05.007] pod/a created
I0110 13:33:05.099] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0110 13:33:05.177] Successful
I0110 13:33:05.178] message:Error from server (NotFound): pods "b" not found
I0110 13:33:05.178] has:pods "b" not found
I0110 13:33:05.321] pod/b created
I0110 13:33:05.416] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0110 13:33:05.500] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0110 13:33:05.577] pod "a" deleted
I0110 13:33:05.582] pod "b" deleted
I0110 13:33:05.737] Successful
I0110 13:33:05.738] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0110 13:33:05.738] has:all resources selected for prune without explicitly passing --all
I0110 13:33:05.884] pod/a created
I0110 13:33:05.890] pod/b created
I0110 13:33:05.898] service/prune-svc created
I0110 13:33:07.196] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0110 13:33:07.278] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I0110 13:33:18.511] Context "test" modified.
I0110 13:33:18.517] +++ [0110 13:33:18] Testing kubectl create filter
I0110 13:33:18.600] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:18.743] pod/selector-test-pod created
I0110 13:33:18.830] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0110 13:33:18.909] Successful
I0110 13:33:18.909] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0110 13:33:18.909] has:pods "selector-test-pod-dont-apply" not found
I0110 13:33:18.983] pod "selector-test-pod" deleted
I0110 13:33:19.001] +++ exit code: 0
I0110 13:33:19.032] Recording: run_kubectl_apply_deployments_tests
I0110 13:33:19.032] Running command: run_kubectl_apply_deployments_tests
I0110 13:33:19.051] 
... skipping 28 lines ...
I0110 13:33:20.932] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:21.015] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:21.101] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:21.249] deployment.extensions/nginx created
I0110 13:33:21.342] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0110 13:33:25.526] Successful
I0110 13:33:25.526] message:Error from server (Conflict): error when applying patch:
I0110 13:33:25.527] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547127199-25734\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0110 13:33:25.527] to:
I0110 13:33:25.527] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0110 13:33:25.527] Name: "nginx", Namespace: "namespace-1547127199-25734"
I0110 13:33:25.528] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547127199-25734\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "name":"nginx" "resourceVersion":"707" "generation":'\x01' "labels":map["name":"nginx"] "namespace":"namespace-1547127199-25734" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547127199-25734/deployments/nginx" "uid":"4c081b8b-14dc-11e9-9eb1-0242ac110002" "creationTimestamp":"2019-01-10T13:33:21Z"] "spec":map["strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["spec":map["securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst"] "metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]]]] "status":map["unavailableReplicas":'\x03' "conditions":[map["message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-01-10T13:33:21Z" "lastTransitionTime":"2019-01-10T13:33:21Z" "reason":"MinimumReplicasUnavailable"]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03']]}
I0110 13:33:25.529] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0110 13:33:25.529] has:Error from server (Conflict)
W0110 13:33:25.629] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0110 13:33:25.629] I0110 13:33:16.639142   53345 controller.go:606] quota admission added evaluator for: jobs.batch
W0110 13:33:25.630] I0110 13:33:16.652817   56685 event.go:221] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1547127196-22478", Name:"pi", UID:"4948f640-14dc-11e9-9eb1-0242ac110002", APIVersion:"batch/v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-mrvzt
W0110 13:33:25.630] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0110 13:33:25.630] I0110 13:33:17.158732   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127196-22478", Name:"nginx-extensions", UID:"49975b1f-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-extensions-6fb4b564f5 to 1
W0110 13:33:25.631] I0110 13:33:17.161702   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127196-22478", Name:"nginx-extensions-6fb4b564f5", UID:"4997ec57-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-extensions-6fb4b564f5-nttc9
... skipping 4 lines ...
W0110 13:33:25.632] I0110 13:33:17.855935   53345 controller.go:606] quota admission added evaluator for: cronjobs.batch
W0110 13:33:25.632] I0110 13:33:19.609482   53345 controller.go:606] quota admission added evaluator for: deployments.extensions
W0110 13:33:25.632] I0110 13:33:19.614653   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127199-25734", Name:"my-depl", UID:"4b0e3481-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-559b7bc95d to 1
W0110 13:33:25.632] I0110 13:33:19.618349   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127199-25734", Name:"my-depl-559b7bc95d", UID:"4b0eb3f6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-559b7bc95d-z9ptl
W0110 13:33:25.632] I0110 13:33:20.140683   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127199-25734", Name:"my-depl", UID:"4b0e3481-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-6676598dcb to 1
W0110 13:33:25.633] I0110 13:33:20.144831   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127199-25734", Name:"my-depl-6676598dcb", UID:"4b5e9957-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-6676598dcb-67mm9
W0110 13:33:25.633] E0110 13:33:20.750462   56685 replica_set.go:450] Sync "namespace-1547127199-25734/my-depl-6676598dcb" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-6676598dcb": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547127199-25734/my-depl-6676598dcb, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 4b5e9957-14dc-11e9-9eb1-0242ac110002, UID in object meta: 
W0110 13:33:25.633] I0110 13:33:20.764077   53345 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0110 13:33:25.634] I0110 13:33:21.252636   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127199-25734", Name:"nginx", UID:"4c081b8b-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0110 13:33:25.634] I0110 13:33:21.254844   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127199-25734", Name:"nginx-5d56d6b95f", UID:"4c089659-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-nzx5k
W0110 13:33:25.634] I0110 13:33:21.256389   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127199-25734", Name:"nginx-5d56d6b95f", UID:"4c089659-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-5vvls
W0110 13:33:25.634] I0110 13:33:21.257838   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127199-25734", Name:"nginx-5d56d6b95f", UID:"4c089659-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-tw6zb
W0110 13:33:29.740] E0110 13:33:29.739545   56685 replica_set.go:450] Sync "namespace-1547127199-25734/nginx-5d56d6b95f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-5d56d6b95f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547127199-25734/nginx-5d56d6b95f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 4c089659-14dc-11e9-9eb1-0242ac110002, UID in object meta: 
I0110 13:33:30.725] deployment.extensions/nginx configured
I0110 13:33:30.812] Successful
I0110 13:33:30.812] message:        "name": "nginx2"
I0110 13:33:30.812]           "name": "nginx2"
I0110 13:33:30.812] has:"name": "nginx2"
W0110 13:33:30.913] I0110 13:33:30.727776   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127199-25734", Name:"nginx", UID:"51add541-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7777658b9d to 3
... skipping 141 lines ...
I0110 13:33:37.875] +++ [0110 13:33:37] Creating namespace namespace-1547127217-18270
I0110 13:33:37.944] namespace/namespace-1547127217-18270 created
I0110 13:33:38.011] Context "test" modified.
I0110 13:33:38.018] +++ [0110 13:33:38] Testing kubectl get
I0110 13:33:38.104] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:38.189] Successful
I0110 13:33:38.190] message:Error from server (NotFound): pods "abc" not found
I0110 13:33:38.190] has:pods "abc" not found
I0110 13:33:38.277] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:38.360] Successful
I0110 13:33:38.360] message:Error from server (NotFound): pods "abc" not found
I0110 13:33:38.361] has:pods "abc" not found
I0110 13:33:38.445] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:38.527] Successful
I0110 13:33:38.527] message:{
I0110 13:33:38.527]     "apiVersion": "v1",
I0110 13:33:38.527]     "items": [],
... skipping 23 lines ...
I0110 13:33:38.840] has not:No resources found
I0110 13:33:38.918] Successful
I0110 13:33:38.919] message:NAME
I0110 13:33:38.919] has not:No resources found
I0110 13:33:39.007] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:39.118] Successful
I0110 13:33:39.118] message:error: the server doesn't have a resource type "foobar"
I0110 13:33:39.118] has not:No resources found
I0110 13:33:39.200] Successful
I0110 13:33:39.200] message:No resources found.
I0110 13:33:39.200] has:No resources found
I0110 13:33:39.282] Successful
I0110 13:33:39.283] message:
I0110 13:33:39.283] has not:No resources found
I0110 13:33:39.363] Successful
I0110 13:33:39.364] message:No resources found.
I0110 13:33:39.364] has:No resources found
I0110 13:33:39.449] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:39.530] Successful
I0110 13:33:39.530] message:Error from server (NotFound): pods "abc" not found
I0110 13:33:39.530] has:pods "abc" not found
I0110 13:33:39.532] FAIL!
I0110 13:33:39.532] message:Error from server (NotFound): pods "abc" not found
I0110 13:33:39.532] has not:List
I0110 13:33:39.532] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0110 13:33:39.642] Successful
I0110 13:33:39.643] message:I0110 13:33:39.591941   69321 loader.go:359] Config loaded from file /tmp/tmp.7lbVgshSX7/.kube/config
I0110 13:33:39.643] I0110 13:33:39.592390   69321 loader.go:359] Config loaded from file /tmp/tmp.7lbVgshSX7/.kube/config
I0110 13:33:39.643] I0110 13:33:39.593798   69321 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 995 lines ...
I0110 13:33:43.068] }
I0110 13:33:43.151] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 13:33:43.379] <no value>Successful
I0110 13:33:43.379] message:valid-pod:
I0110 13:33:43.379] has:valid-pod:
I0110 13:33:43.457] Successful
I0110 13:33:43.457] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0110 13:33:43.457] 	template was:
I0110 13:33:43.458] 		{.missing}
I0110 13:33:43.458] 	object given to jsonpath engine was:
I0110 13:33:43.458] 		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"uid":"58fca173-14dc-11e9-9eb1-0242ac110002", "resourceVersion":"802", "creationTimestamp":"2019-01-10T13:33:42Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1547127222-202", "selfLink":"/api/v1/namespaces/namespace-1547127222-202/pods/valid-pod"}, "spec":map[string]interface {}{"priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"memory":"512Mi", "cpu":"1"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod"}
I0110 13:33:43.459] has:missing is not found
I0110 13:33:43.535] Successful
I0110 13:33:43.535] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0110 13:33:43.535] 	template was:
I0110 13:33:43.535] 		{{.missing}}
I0110 13:33:43.535] 	raw data was:
I0110 13:33:43.536] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-10T13:33:42Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547127222-202","resourceVersion":"802","selfLink":"/api/v1/namespaces/namespace-1547127222-202/pods/valid-pod","uid":"58fca173-14dc-11e9-9eb1-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0110 13:33:43.536] 	object given to template engine was:
I0110 13:33:43.536] 		map[apiVersion:v1 kind:Pod metadata:map[resourceVersion:802 selfLink:/api/v1/namespaces/namespace-1547127222-202/pods/valid-pod uid:58fca173-14dc-11e9-9eb1-0242ac110002 creationTimestamp:2019-01-10T13:33:42Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547127222-202] spec:map[dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log]]] status:map[phase:Pending qosClass:Guaranteed]]
I0110 13:33:43.536] has:map has no entry for key "missing"
W0110 13:33:43.637] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0110 13:33:44.610] E0110 13:33:44.609814   69721 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0110 13:33:44.711] Successful
I0110 13:33:44.711] message:NAME        READY   STATUS    RESTARTS   AGE
I0110 13:33:44.711] valid-pod   0/1     Pending   0          1s
I0110 13:33:44.711] has:STATUS
I0110 13:33:44.712] Successful
... skipping 80 lines ...
I0110 13:33:46.881]   terminationGracePeriodSeconds: 30
I0110 13:33:46.881] status:
I0110 13:33:46.881]   phase: Pending
I0110 13:33:46.881]   qosClass: Guaranteed
I0110 13:33:46.881] has:name: valid-pod
I0110 13:33:46.881] Successful
I0110 13:33:46.881] message:Error from server (NotFound): pods "invalid-pod" not found
I0110 13:33:46.881] has:"invalid-pod" not found
I0110 13:33:46.940] pod "valid-pod" deleted
I0110 13:33:47.032] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:33:47.175] pod/redis-master created
I0110 13:33:47.179] pod/valid-pod created
I0110 13:33:47.266] Successful
... skipping 317 lines ...
I0110 13:33:51.278] Running command: run_create_secret_tests
I0110 13:33:51.297] 
I0110 13:33:51.299] +++ Running case: test-cmd.run_create_secret_tests 
I0110 13:33:51.301] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:33:51.303] +++ command: run_create_secret_tests
I0110 13:33:51.391] Successful
I0110 13:33:51.392] message:Error from server (NotFound): secrets "mysecret" not found
I0110 13:33:51.392] has:secrets "mysecret" not found
W0110 13:33:51.492] I0110 13:33:50.481961   53345 clientconn.go:551] parsed scheme: ""
W0110 13:33:51.493] I0110 13:33:50.482000   53345 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0110 13:33:51.493] I0110 13:33:50.482045   53345 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0110 13:33:51.493] I0110 13:33:50.482078   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:33:51.493] I0110 13:33:50.482436   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:33:51.493] No resources found.
W0110 13:33:51.493] No resources found.
I0110 13:33:51.594] Successful
I0110 13:33:51.594] message:Error from server (NotFound): secrets "mysecret" not found
I0110 13:33:51.594] has:secrets "mysecret" not found
I0110 13:33:51.594] Successful
I0110 13:33:51.595] message:user-specified
I0110 13:33:51.595] has:user-specified
I0110 13:33:51.610] Successful
I0110 13:33:51.681] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"5e2b49a4-14dc-11e9-9eb1-0242ac110002","resourceVersion":"876","creationTimestamp":"2019-01-10T13:33:51Z"}}
... skipping 80 lines ...
I0110 13:33:53.540] has:Timeout exceeded while reading body
I0110 13:33:53.615] Successful
I0110 13:33:53.615] message:NAME        READY   STATUS    RESTARTS   AGE
I0110 13:33:53.615] valid-pod   0/1     Pending   0          1s
I0110 13:33:53.615] has:valid-pod
I0110 13:33:53.680] Successful
I0110 13:33:53.680] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0110 13:33:53.680] has:Invalid timeout value
I0110 13:33:53.756] pod "valid-pod" deleted
I0110 13:33:53.776] +++ exit code: 0
I0110 13:33:53.809] Recording: run_crd_tests
I0110 13:33:53.809] Running command: run_crd_tests
I0110 13:33:53.827] 
... skipping 166 lines ...
I0110 13:33:57.966] foo.company.com/test patched
I0110 13:33:58.051] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0110 13:33:58.132] foo.company.com/test patched
I0110 13:33:58.220] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0110 13:33:58.298] foo.company.com/test patched
I0110 13:33:58.386] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0110 13:33:58.532] +++ [0110 13:33:58] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0110 13:33:58.593] {
I0110 13:33:58.593]     "apiVersion": "company.com/v1",
I0110 13:33:58.593]     "kind": "Foo",
I0110 13:33:58.593]     "metadata": {
I0110 13:33:58.593]         "annotations": {
I0110 13:33:58.594]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W0110 13:34:00.091] I0110 13:33:56.351538   53345 controller.go:606] quota admission added evaluator for: foos.company.com
W0110 13:34:00.091] I0110 13:33:59.730902   53345 controller.go:606] quota admission added evaluator for: bars.company.com
W0110 13:34:00.091] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 72303 Killed                  while [ ${tries} -lt 10 ]; do
W0110 13:34:00.091]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0110 13:34:00.091] done
W0110 13:34:00.091] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 72302 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0110 13:34:11.521] E0110 13:34:11.520322   56685 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0110 13:34:11.658] I0110 13:34:11.657480   56685 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0110 13:34:11.659] I0110 13:34:11.658676   53345 clientconn.go:551] parsed scheme: ""
W0110 13:34:11.659] I0110 13:34:11.658704   53345 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0110 13:34:11.659] I0110 13:34:11.658733   53345 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0110 13:34:11.659] I0110 13:34:11.658795   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:34:11.659] I0110 13:34:11.659358   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I0110 13:34:23.404] +++ [0110 13:34:23] Testing cmd with image
I0110 13:34:23.489] Successful
I0110 13:34:23.489] message:deployment.apps/test1 created
I0110 13:34:23.489] has:deployment.apps/test1 created
I0110 13:34:23.562] deployment.extensions "test1" deleted
I0110 13:34:23.634] Successful
I0110 13:34:23.635] message:error: Invalid image name "InvalidImageName": invalid reference format
I0110 13:34:23.635] has:error: Invalid image name "InvalidImageName": invalid reference format
I0110 13:34:23.649] +++ exit code: 0
I0110 13:34:23.685] Recording: run_recursive_resources_tests
I0110 13:34:23.685] Running command: run_recursive_resources_tests
I0110 13:34:23.704] 
I0110 13:34:23.706] +++ Running case: test-cmd.run_recursive_resources_tests 
I0110 13:34:23.708] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0110 13:34:23.858] Context "test" modified.
I0110 13:34:23.942] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:24.179] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:24.181] Successful
I0110 13:34:24.181] message:pod/busybox0 created
I0110 13:34:24.181] pod/busybox1 created
I0110 13:34:24.181] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 13:34:24.181] has:error validating data: kind not set
I0110 13:34:24.266] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:24.431] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0110 13:34:24.433] Successful
I0110 13:34:24.433] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:24.433] has:Object 'Kind' is missing
I0110 13:34:24.522] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:24.760] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0110 13:34:24.762] Successful
I0110 13:34:24.763] message:pod/busybox0 replaced
I0110 13:34:24.763] pod/busybox1 replaced
I0110 13:34:24.763] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 13:34:24.763] has:error validating data: kind not set
I0110 13:34:24.846] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:24.938] (BSuccessful
I0110 13:34:24.939] message:Name:               busybox0
I0110 13:34:24.939] Namespace:          namespace-1547127263-30024
I0110 13:34:24.939] Priority:           0
I0110 13:34:24.939] PriorityClassName:  <none>
... skipping 159 lines ...
I0110 13:34:24.953] has:Object 'Kind' is missing
I0110 13:34:25.031] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:25.196] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0110 13:34:25.198] Successful
I0110 13:34:25.198] message:pod/busybox0 annotated
I0110 13:34:25.199] pod/busybox1 annotated
I0110 13:34:25.199] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:25.199] has:Object 'Kind' is missing
I0110 13:34:25.289] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:25.541] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0110 13:34:25.545] Successful
I0110 13:34:25.545] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0110 13:34:25.545] pod/busybox0 configured
I0110 13:34:25.545] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0110 13:34:25.545] pod/busybox1 configured
I0110 13:34:25.546] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0110 13:34:25.546] has:error validating data: kind not set
I0110 13:34:25.630] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:25.772] deployment.apps/nginx created
I0110 13:34:25.868] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0110 13:34:25.954] generic-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:34:26.116] generic-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0110 13:34:26.119] Successful
... skipping 42 lines ...
I0110 13:34:26.195] deployment.extensions "nginx" deleted
I0110 13:34:26.294] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:26.449] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:26.450] Successful
I0110 13:34:26.450] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0110 13:34:26.450] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0110 13:34:26.451] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:26.451] has:Object 'Kind' is missing
I0110 13:34:26.537] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:26.619] Successful
I0110 13:34:26.619] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:26.619] has:busybox0:busybox1:
I0110 13:34:26.621] Successful
I0110 13:34:26.621] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:26.621] has:Object 'Kind' is missing
I0110 13:34:26.708] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:26.795] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:26.883] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0110 13:34:26.885] Successful
I0110 13:34:26.885] message:pod/busybox0 labeled
I0110 13:34:26.885] pod/busybox1 labeled
I0110 13:34:26.886] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:26.886] has:Object 'Kind' is missing
I0110 13:34:26.971] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:27.051] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:27.135] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0110 13:34:27.137] Successful
I0110 13:34:27.137] message:pod/busybox0 patched
I0110 13:34:27.137] pod/busybox1 patched
I0110 13:34:27.138] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:27.138] has:Object 'Kind' is missing
I0110 13:34:27.223] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:27.391] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:27.393] Successful
I0110 13:34:27.393] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0110 13:34:27.394] pod "busybox0" force deleted
I0110 13:34:27.394] pod "busybox1" force deleted
I0110 13:34:27.394] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0110 13:34:27.394] has:Object 'Kind' is missing
I0110 13:34:27.475] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:27.609] replicationcontroller/busybox0 created
I0110 13:34:27.613] replicationcontroller/busybox1 created
I0110 13:34:27.706] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:27.793] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:27.877] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 13:34:27.963] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 13:34:28.132] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0110 13:34:28.214] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0110 13:34:28.216] Successful
I0110 13:34:28.216] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0110 13:34:28.216] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0110 13:34:28.216] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:28.216] has:Object 'Kind' is missing
I0110 13:34:28.291] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0110 13:34:28.370] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0110 13:34:28.460] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:28.544] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 13:34:28.625] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 13:34:28.801] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0110 13:34:28.887] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0110 13:34:28.889] Successful
I0110 13:34:28.889] message:service/busybox0 exposed
I0110 13:34:28.890] service/busybox1 exposed
I0110 13:34:28.890] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:28.890] has:Object 'Kind' is missing
I0110 13:34:28.971] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:29.053] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0110 13:34:29.141] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0110 13:34:29.322] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0110 13:34:29.407] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0110 13:34:29.409] Successful
I0110 13:34:29.410] message:replicationcontroller/busybox0 scaled
I0110 13:34:29.410] replicationcontroller/busybox1 scaled
I0110 13:34:29.410] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:29.410] has:Object 'Kind' is missing
I0110 13:34:29.494] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:29.660] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:29.663] Successful
I0110 13:34:29.663] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0110 13:34:29.663] replicationcontroller "busybox0" force deleted
I0110 13:34:29.664] replicationcontroller "busybox1" force deleted
I0110 13:34:29.664] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:29.664] has:Object 'Kind' is missing
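The decode failure repeated throughout this block is expected: the fixture busybox-broken.yaml deliberately misspells its kind field as "ind", so the object's Kind cannot be determined. A minimal sketch of how the error surfaces (abridged from the JSON in the log; not the exact harness invocation):

    # hack/testdata/recursive/rc/rc/busybox-broken.yaml (abridged): "kind" is misspelled
    #   apiVersion: v1
    #   ind: ReplicationController        <-- should be "kind: ReplicationController"
    kubectl create -f hack/testdata/recursive/rc --recursive
    #   error: error validating "...busybox-broken.yaml": kind not set; if you choose to
    #   ignore these errors, turn validation off with --validate=false
    # operations that skip validation instead fail while decoding the file:
    #   error: unable to decode "...busybox-broken.yaml": Object 'Kind' is missing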
I0110 13:34:29.748] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:29.889] deployment.apps/nginx1-deployment created
I0110 13:34:29.892] deployment.apps/nginx0-deployment created
I0110 13:34:29.992] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0110 13:34:30.078] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0110 13:34:30.269] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0110 13:34:30.271] Successful
I0110 13:34:30.272] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0110 13:34:30.272] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0110 13:34:30.272] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 13:34:30.272] has:Object 'Kind' is missing
I0110 13:34:30.357] deployment.apps/nginx1-deployment paused
I0110 13:34:30.360] deployment.apps/nginx0-deployment paused
I0110 13:34:30.456] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0110 13:34:30.458] Successful
I0110 13:34:30.459] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0110 13:34:30.738] 1         <none>
I0110 13:34:30.738] 
I0110 13:34:30.738] deployment.apps/nginx0-deployment 
I0110 13:34:30.738] REVISION  CHANGE-CAUSE
I0110 13:34:30.738] 1         <none>
I0110 13:34:30.738] 
I0110 13:34:30.739] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 13:34:30.739] has:nginx0-deployment
I0110 13:34:30.739] Successful
I0110 13:34:30.739] message:deployment.apps/nginx1-deployment 
I0110 13:34:30.739] REVISION  CHANGE-CAUSE
I0110 13:34:30.740] 1         <none>
I0110 13:34:30.740] 
I0110 13:34:30.740] deployment.apps/nginx0-deployment 
I0110 13:34:30.740] REVISION  CHANGE-CAUSE
I0110 13:34:30.740] 1         <none>
I0110 13:34:30.740] 
I0110 13:34:30.740] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 13:34:30.741] has:nginx1-deployment
I0110 13:34:30.741] Successful
I0110 13:34:30.741] message:deployment.apps/nginx1-deployment 
I0110 13:34:30.741] REVISION  CHANGE-CAUSE
I0110 13:34:30.741] 1         <none>
I0110 13:34:30.742] 
I0110 13:34:30.742] deployment.apps/nginx0-deployment 
I0110 13:34:30.742] REVISION  CHANGE-CAUSE
I0110 13:34:30.742] 1         <none>
I0110 13:34:30.742] 
I0110 13:34:30.742] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 13:34:30.743] has:Object 'Kind' is missing
I0110 13:34:30.810] deployment.apps "nginx1-deployment" force deleted
I0110 13:34:30.816] deployment.apps "nginx0-deployment" force deleted
W0110 13:34:30.917] Error from server (NotFound): namespaces "non-native-resources" not found
W0110 13:34:30.917] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0110 13:34:30.917] I0110 13:34:23.477713   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127263-25516", Name:"test1", UID:"711eed45-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W0110 13:34:30.918] I0110 13:34:23.482098   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-25516", Name:"test1-fb488bd5d", UID:"711f67e1-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-h76g7
W0110 13:34:30.918] I0110 13:34:25.774897   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127263-30024", Name:"nginx", UID:"727d64a5-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0110 13:34:30.918] I0110 13:34:25.777430   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx-6f6bb85d9c", UID:"727de8ea-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-x5p5w
W0110 13:34:30.919] I0110 13:34:25.779892   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx-6f6bb85d9c", UID:"727de8ea-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-hrvnc
W0110 13:34:30.919] I0110 13:34:25.780161   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx-6f6bb85d9c", UID:"727de8ea-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-796cg
W0110 13:34:30.919] kubectl convert is DEPRECATED and will be removed in a future version.
W0110 13:34:30.919] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0110 13:34:30.920] I0110 13:34:27.611927   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox0", UID:"7395bf20-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1043", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-nzqft
W0110 13:34:30.920] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 13:34:30.920] I0110 13:34:27.614901   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox1", UID:"73966ce7-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-mmnct
W0110 13:34:30.920] I0110 13:34:27.646808   56685 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0110 13:34:30.921] I0110 13:34:29.228550   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox0", UID:"7395bf20-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9gf9w
W0110 13:34:30.921] I0110 13:34:29.235545   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox1", UID:"73966ce7-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-ghk5n
W0110 13:34:30.921] I0110 13:34:29.891237   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127263-30024", Name:"nginx1-deployment", UID:"74f192c1-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0110 13:34:30.921] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 13:34:30.922] I0110 13:34:29.894364   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx1-deployment-75f6fc6747", UID:"74f2168d-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-7f6rc
W0110 13:34:30.922] I0110 13:34:29.897341   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx1-deployment-75f6fc6747", UID:"74f2168d-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1087", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-7kg5f
W0110 13:34:30.922] I0110 13:34:29.898061   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127263-30024", Name:"nginx0-deployment", UID:"74f23ff6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1088", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0110 13:34:30.923] I0110 13:34:29.899104   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx0-deployment-b6bb4ccbb", UID:"74f2ee20-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-lk8hg
W0110 13:34:30.923] I0110 13:34:29.904057   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127263-30024", Name:"nginx0-deployment-b6bb4ccbb", UID:"74f2ee20-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-jkkpb
W0110 13:34:30.923] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 13:34:30.923] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0110 13:34:31.903] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:32.047] replicationcontroller/busybox0 created
I0110 13:34:32.050] replicationcontroller/busybox1 created
I0110 13:34:32.148] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0110 13:34:32.236] Successful
I0110 13:34:32.236] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0110 13:34:32.239] message:no rollbacker has been implemented for "ReplicationController"
I0110 13:34:32.239] no rollbacker has been implemented for "ReplicationController"
I0110 13:34:32.239] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.240] has:Object 'Kind' is missing
I0110 13:34:32.323] Successful
I0110 13:34:32.324] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.324] error: replicationcontrollers "busybox0" pausing is not supported
I0110 13:34:32.324] error: replicationcontrollers "busybox1" pausing is not supported
I0110 13:34:32.324] has:Object 'Kind' is missing
I0110 13:34:32.325] Successful
I0110 13:34:32.326] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.326] error: replicationcontrollers "busybox0" pausing is not supported
I0110 13:34:32.326] error: replicationcontrollers "busybox1" pausing is not supported
I0110 13:34:32.326] has:replicationcontrollers "busybox0" pausing is not supported
I0110 13:34:32.327] Successful
I0110 13:34:32.327] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.327] error: replicationcontrollers "busybox0" pausing is not supported
I0110 13:34:32.327] error: replicationcontrollers "busybox1" pausing is not supported
I0110 13:34:32.328] has:replicationcontrollers "busybox1" pausing is not supported
I0110 13:34:32.412] Successful
I0110 13:34:32.412] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.412] error: replicationcontrollers "busybox0" resuming is not supported
I0110 13:34:32.413] error: replicationcontrollers "busybox1" resuming is not supported
I0110 13:34:32.413] has:Object 'Kind' is missing
I0110 13:34:32.414] Successful
I0110 13:34:32.414] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.414] error: replicationcontrollers "busybox0" resuming is not supported
I0110 13:34:32.414] error: replicationcontrollers "busybox1" resuming is not supported
I0110 13:34:32.415] has:replicationcontrollers "busybox0" resuming is not supported
I0110 13:34:32.416] Successful
I0110 13:34:32.416] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0110 13:34:32.416] error: replicationcontrollers "busybox0" resuming is not supported
I0110 13:34:32.417] error: replicationcontrollers "busybox1" resuming is not supported
I0110 13:34:32.417] has:replicationcontrollers "busybox0" resuming is not supported
I0110 13:34:32.486] replicationcontroller "busybox0" force deleted
I0110 13:34:32.490] replicationcontroller "busybox1" force deleted
W0110 13:34:32.591] I0110 13:34:32.049785   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox0", UID:"763ae63f-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8z926
W0110 13:34:32.592] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0110 13:34:32.592] I0110 13:34:32.052538   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127263-30024", Name:"busybox1", UID:"763b95b0-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1137", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tx5wg
W0110 13:34:32.592] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 13:34:32.592] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
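The pausing/resuming failures above are also expected: rollout pause and resume only apply to rollout-managed objects such as Deployments, not ReplicationControllers. A hedged sketch of the distinction (not the exact harness commands):

    kubectl rollout pause  deployment/nginx1-deployment   # supported; .spec.paused becomes true
    kubectl rollout resume deployment/nginx1-deployment
    kubectl rollout pause  rc/busybox0                     # error: replicationcontrollers "busybox0" pausing is not supported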
I0110 13:34:33.510] +++ exit code: 0
I0110 13:34:33.559] Recording: run_namespace_tests
I0110 13:34:33.560] Running command: run_namespace_tests
I0110 13:34:33.578] 
I0110 13:34:33.580] +++ Running case: test-cmd.run_namespace_tests 
I0110 13:34:33.582] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:34:33.584] +++ command: run_namespace_tests
I0110 13:34:33.592] +++ [0110 13:34:33] Testing kubectl(v1:namespaces)
I0110 13:34:33.657] namespace/my-namespace created
I0110 13:34:33.739] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0110 13:34:33.810] namespace "my-namespace" deleted
I0110 13:34:38.922] namespace/my-namespace condition met
I0110 13:34:39.005] Successful
I0110 13:34:39.005] message:Error from server (NotFound): namespaces "my-namespace" not found
I0110 13:34:39.005] has: not found
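The "condition met" line above is the deletion wait completing before the follow-up get is asserted to return NotFound. A minimal sketch, assuming kubectl wait with --for=delete (the harness command itself is not shown in this log):

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s   # prints: namespace/my-namespace condition met
    kubectl get namespaces my-namespace                              # Error from server (NotFound): namespaces "my-namespace" not found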
I0110 13:34:39.115] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0110 13:34:39.176] namespace/other created
I0110 13:34:39.259] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0110 13:34:39.339] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:39.476] pod/valid-pod created
I0110 13:34:39.564] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 13:34:39.645] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 13:34:39.717] Successful
I0110 13:34:39.717] message:error: a resource cannot be retrieved by name across all namespaces
I0110 13:34:39.717] has:a resource cannot be retrieved by name across all namespaces
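The error above documents a deliberate kubectl restriction: a resource name cannot be combined with --all-namespaces. Sketch:

    kubectl get pods --all-namespaces              # fine: lists pods across namespaces
    kubectl get pods valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods valid-pod --namespace=other   # name lookups need a concrete namespace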
I0110 13:34:39.799] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0110 13:34:39.871] pod "valid-pod" force deleted
I0110 13:34:39.959] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:34:40.028] namespace "other" deleted
W0110 13:34:40.129] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0110 13:34:41.573] E0110 13:34:41.572464   56685 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0110 13:34:41.810] I0110 13:34:41.810148   56685 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0110 13:34:41.911] I0110 13:34:41.910504   56685 controller_utils.go:1028] Caches are synced for garbage collector controller
W0110 13:34:43.042] I0110 13:34:43.041840   56685 horizontal.go:313] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1547127263-30024
W0110 13:34:43.045] I0110 13:34:43.045338   56685 horizontal.go:313] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1547127263-30024
W0110 13:34:43.919] I0110 13:34:43.918505   56685 namespace_controller.go:171] Namespace has been deleted my-namespace
I0110 13:34:45.154] +++ exit code: 0
... skipping 113 lines ...
I0110 13:35:00.162] +++ command: run_client_config_tests
I0110 13:35:00.173] +++ [0110 13:35:00] Creating namespace namespace-1547127300-11542
I0110 13:35:00.239] namespace/namespace-1547127300-11542 created
I0110 13:35:00.305] Context "test" modified.
I0110 13:35:00.312] +++ [0110 13:35:00] Testing client config
I0110 13:35:00.376] Successful
I0110 13:35:00.377] message:error: stat missing: no such file or directory
I0110 13:35:00.377] has:missing: no such file or directory
I0110 13:35:00.439] Successful
I0110 13:35:00.440] message:error: stat missing: no such file or directory
I0110 13:35:00.440] has:missing: no such file or directory
I0110 13:35:00.503] Successful
I0110 13:35:00.504] message:error: stat missing: no such file or directory
I0110 13:35:00.504] has:missing: no such file or directory
I0110 13:35:00.565] Successful
I0110 13:35:00.566] message:Error in configuration: context was not found for specified context: missing-context
I0110 13:35:00.566] has:context was not found for specified context: missing-context
I0110 13:35:00.629] Successful
I0110 13:35:00.629] message:error: no server found for cluster "missing-cluster"
I0110 13:35:00.629] has:no server found for cluster "missing-cluster"
I0110 13:35:00.695] Successful
I0110 13:35:00.695] message:error: auth info "missing-user" does not exist
I0110 13:35:00.695] has:auth info "missing-user" does not exist
I0110 13:35:00.819] Successful
I0110 13:35:00.819] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0110 13:35:00.819] has:Error loading config file
I0110 13:35:00.882] Successful
I0110 13:35:00.883] message:error: stat missing-config: no such file or directory
I0110 13:35:00.883] has:no such file or directory
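The client-config assertions above cover the usual misconfiguration paths. A hedged sketch of invocations that would produce these messages (the flag-to-message pairing is inferred, not shown in the log):

    kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # Error in configuration: context was not found for specified context: missing-context
    kubectl get pods --cluster=missing-cluster   # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # error: auth info "missing-user" does not exist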
I0110 13:35:00.896] +++ exit code: 0
I0110 13:35:00.929] Recording: run_service_accounts_tests
I0110 13:35:00.929] Running command: run_service_accounts_tests
I0110 13:35:00.949] 
I0110 13:35:00.951] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0110 13:35:07.584] Labels:                        run=pi
I0110 13:35:07.584] Annotations:                   <none>
I0110 13:35:07.585] Schedule:                      59 23 31 2 *
I0110 13:35:07.585] Concurrency Policy:            Allow
I0110 13:35:07.585] Suspend:                       False
I0110 13:35:07.585] Successful Job History Limit:  824637758808
I0110 13:35:07.585] Failed Job History Limit:      1
I0110 13:35:07.585] Starting Deadline Seconds:     <unset>
I0110 13:35:07.585] Selector:                      <unset>
I0110 13:35:07.586] Parallelism:                   <unset>
I0110 13:35:07.586] Completions:                   <unset>
I0110 13:35:07.586] Pod Template:
I0110 13:35:07.586]   Labels:  run=pi
... skipping 31 lines ...
I0110 13:35:08.083]                 job-name=test-job
I0110 13:35:08.083]                 run=pi
I0110 13:35:08.083] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0110 13:35:08.083] Parallelism:    1
I0110 13:35:08.083] Completions:    1
I0110 13:35:08.083] Start Time:     Thu, 10 Jan 2019 13:35:07 +0000
I0110 13:35:08.083] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0110 13:35:08.083] Pod Template:
I0110 13:35:08.084]   Labels:  controller-uid=8b8f3edb-14dc-11e9-9eb1-0242ac110002
I0110 13:35:08.084]            job-name=test-job
I0110 13:35:08.084]            run=pi
I0110 13:35:08.084]   Containers:
I0110 13:35:08.084]    pi:
... skipping 329 lines ...
I0110 13:35:17.399]   selector:
I0110 13:35:17.399]     role: padawan
I0110 13:35:17.399]   sessionAffinity: None
I0110 13:35:17.399]   type: ClusterIP
I0110 13:35:17.399] status:
I0110 13:35:17.399]   loadBalancer: {}
W0110 13:35:17.500] error: you must specify resources by --filename when --local is set.
W0110 13:35:17.500] Example resource specifications include:
W0110 13:35:17.500]    '-f rsrc.yaml'
W0110 13:35:17.500]    '--filename=rsrc.json'
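The --local error above comes from asking kubectl to mutate an object purely client-side without giving it a file to work on; judging by the role: padawan selector in the dump above, this is presumably kubectl set selector. Sketch (file name hypothetical):

    kubectl set selector --local role=padawan -o yaml
    #   error: you must specify resources by --filename when --local is set.
    kubectl set selector --local -f redis-master-service.yaml role=padawan -o yaml
    #   renders the change from the local file without contacting the server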
I0110 13:35:17.601] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0110 13:35:17.707] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0110 13:35:17.785] service "redis-master" deleted
... skipping 93 lines ...
I0110 13:35:23.332] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:35:23.418] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0110 13:35:23.515] daemonset.extensions/bind rolled back
I0110 13:35:23.606] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 13:35:23.693] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 13:35:23.789] Successful
I0110 13:35:23.789] message:error: unable to find specified revision 1000000 in history
I0110 13:35:23.789] has:unable to find specified revision
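The revision error above is rollout undo being pointed at a revision that does not exist in the DaemonSet's history. Sketch (object name as in the log; not the exact harness command):

    kubectl rollout history daemonset/bind                      # list recorded revisions
    kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision 1000000 in history
    kubectl rollout undo daemonset/bind --to-revision=1         # roll back to a revision that exists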
I0110 13:35:23.873] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 13:35:23.958] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 13:35:24.055] daemonset.extensions/bind rolled back
I0110 13:35:24.145] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0110 13:35:24.231] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0110 13:35:25.513] Namespace:    namespace-1547127324-10885
I0110 13:35:25.513] Selector:     app=guestbook,tier=frontend
I0110 13:35:25.513] Labels:       app=guestbook
I0110 13:35:25.514]               tier=frontend
I0110 13:35:25.514] Annotations:  <none>
I0110 13:35:25.514] Replicas:     3 current / 3 desired
I0110 13:35:25.514] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:25.514] Pod Template:
I0110 13:35:25.514]   Labels:  app=guestbook
I0110 13:35:25.514]            tier=frontend
I0110 13:35:25.514]   Containers:
I0110 13:35:25.514]    php-redis:
I0110 13:35:25.514]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 13:35:25.624] Namespace:    namespace-1547127324-10885
I0110 13:35:25.624] Selector:     app=guestbook,tier=frontend
I0110 13:35:25.624] Labels:       app=guestbook
I0110 13:35:25.625]               tier=frontend
I0110 13:35:25.625] Annotations:  <none>
I0110 13:35:25.625] Replicas:     3 current / 3 desired
I0110 13:35:25.625] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:25.625] Pod Template:
I0110 13:35:25.625]   Labels:  app=guestbook
I0110 13:35:25.625]            tier=frontend
I0110 13:35:25.625]   Containers:
I0110 13:35:25.625]    php-redis:
I0110 13:35:25.625]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0110 13:35:25.731] Namespace:    namespace-1547127324-10885
I0110 13:35:25.731] Selector:     app=guestbook,tier=frontend
I0110 13:35:25.731] Labels:       app=guestbook
I0110 13:35:25.732]               tier=frontend
I0110 13:35:25.732] Annotations:  <none>
I0110 13:35:25.732] Replicas:     3 current / 3 desired
I0110 13:35:25.732] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:25.732] Pod Template:
I0110 13:35:25.732]   Labels:  app=guestbook
I0110 13:35:25.732]            tier=frontend
I0110 13:35:25.732]   Containers:
I0110 13:35:25.732]    php-redis:
I0110 13:35:25.732]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 5 lines ...
I0110 13:35:25.733]     Environment:
I0110 13:35:25.733]       GET_HOSTS_FROM:  dns
I0110 13:35:25.733]     Mounts:            <none>
I0110 13:35:25.733]   Volumes:             <none>
I0110 13:35:25.733] 
W0110 13:35:25.834] I0110 13:35:21.205578   53345 controller.go:606] quota admission added evaluator for: daemonsets.extensions
W0110 13:35:25.836] E0110 13:35:23.525120   56685 daemon_controller.go:302] namespace-1547127321-27871/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1547127321-27871", SelfLink:"/apis/apps/v1/namespaces/namespace-1547127321-27871/daemonsets/bind", UID:"9421ed78-14dc-11e9-9eb1-0242ac110002", ResourceVersion:"1350", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63682724122, loc:(*time.Location)(0x6962be0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1547127321-27871\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001917a00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0037595a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003f85e60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001917a40), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001277f20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc003759620)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0110 13:35:25.837] I0110 13:35:24.865285   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95b584f9-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f466g
W0110 13:35:25.837] I0110 13:35:24.867758   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95b584f9-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xr2mv
W0110 13:35:25.837] I0110 13:35:24.868652   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95b584f9-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f5n9j
W0110 13:35:25.838] I0110 13:35:25.270361   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x4qvw
W0110 13:35:25.838] I0110 13:35:25.273180   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2c95s
W0110 13:35:25.838] I0110 13:35:25.274224   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1378", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5jnqr
... skipping 2 lines ...
I0110 13:35:25.939] Namespace:    namespace-1547127324-10885
I0110 13:35:25.939] Selector:     app=guestbook,tier=frontend
I0110 13:35:25.939] Labels:       app=guestbook
I0110 13:35:25.939]               tier=frontend
I0110 13:35:25.939] Annotations:  <none>
I0110 13:35:25.939] Replicas:     3 current / 3 desired
I0110 13:35:25.939] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:25.940] Pod Template:
I0110 13:35:25.940]   Labels:  app=guestbook
I0110 13:35:25.940]            tier=frontend
I0110 13:35:25.940]   Containers:
I0110 13:35:25.940]    php-redis:
I0110 13:35:25.940]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0110 13:35:25.978] Namespace:    namespace-1547127324-10885
I0110 13:35:25.978] Selector:     app=guestbook,tier=frontend
I0110 13:35:25.978] Labels:       app=guestbook
I0110 13:35:25.978]               tier=frontend
I0110 13:35:25.978] Annotations:  <none>
I0110 13:35:25.979] Replicas:     3 current / 3 desired
I0110 13:35:25.979] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:25.979] Pod Template:
I0110 13:35:25.979]   Labels:  app=guestbook
I0110 13:35:25.979]            tier=frontend
I0110 13:35:25.979]   Containers:
I0110 13:35:25.979]    php-redis:
I0110 13:35:25.979]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 13:35:26.086] Namespace:    namespace-1547127324-10885
I0110 13:35:26.087] Selector:     app=guestbook,tier=frontend
I0110 13:35:26.087] Labels:       app=guestbook
I0110 13:35:26.087]               tier=frontend
I0110 13:35:26.087] Annotations:  <none>
I0110 13:35:26.087] Replicas:     3 current / 3 desired
I0110 13:35:26.087] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:26.087] Pod Template:
I0110 13:35:26.087]   Labels:  app=guestbook
I0110 13:35:26.088]            tier=frontend
I0110 13:35:26.088]   Containers:
I0110 13:35:26.088]    php-redis:
I0110 13:35:26.088]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0110 13:35:26.194] Namespace:    namespace-1547127324-10885
I0110 13:35:26.194] Selector:     app=guestbook,tier=frontend
I0110 13:35:26.194] Labels:       app=guestbook
I0110 13:35:26.194]               tier=frontend
I0110 13:35:26.194] Annotations:  <none>
I0110 13:35:26.194] Replicas:     3 current / 3 desired
I0110 13:35:26.195] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:26.195] Pod Template:
I0110 13:35:26.195]   Labels:  app=guestbook
I0110 13:35:26.195]            tier=frontend
I0110 13:35:26.195]   Containers:
I0110 13:35:26.195]    php-redis:
I0110 13:35:26.195]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0110 13:35:26.312] Namespace:    namespace-1547127324-10885
I0110 13:35:26.313] Selector:     app=guestbook,tier=frontend
I0110 13:35:26.313] Labels:       app=guestbook
I0110 13:35:26.313]               tier=frontend
I0110 13:35:26.313] Annotations:  <none>
I0110 13:35:26.313] Replicas:     3 current / 3 desired
I0110 13:35:26.313] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:26.313] Pod Template:
I0110 13:35:26.313]   Labels:  app=guestbook
I0110 13:35:26.313]            tier=frontend
I0110 13:35:26.313]   Containers:
I0110 13:35:26.313]    php-redis:
I0110 13:35:26.313]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0110 13:35:27.145] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0110 13:35:27.234] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0110 13:35:27.320] replicationcontroller/frontend scaled
I0110 13:35:27.415] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0110 13:35:27.494] replicationcontroller "frontend" deleted
W0110 13:35:27.595] I0110 13:35:26.514514   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1388", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-2c95s
W0110 13:35:27.595] error: Expected replicas to be 3, was 2
W0110 13:35:27.596] I0110 13:35:27.055526   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8xcw9
W0110 13:35:27.596] I0110 13:35:27.325120   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"95f39d90-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1399", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-8xcw9
W0110 13:35:27.651] I0110 13:35:27.650808   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"redis-master", UID:"975ed3c5-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-pncth
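The "Expected replicas to be 3, was 2" error in the warnings above is a scale precondition failing, presumably via --current-replicas; the unconditional scale that follows succeeds. Sketch:

    kubectl scale rc frontend --current-replicas=3 --replicas=2   # refused unless the live replica count is exactly 3
    kubectl scale rc frontend --replicas=2                        # unconditional scale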
I0110 13:35:27.752] replicationcontroller/redis-master created
I0110 13:35:27.801] replicationcontroller/redis-slave created
W0110 13:35:27.901] I0110 13:35:27.804375   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"redis-slave", UID:"97763f46-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-rv4mt
... skipping 36 lines ...
I0110 13:35:29.371] service "expose-test-deployment" deleted
I0110 13:35:29.465] Successful
I0110 13:35:29.466] message:service/expose-test-deployment exposed
I0110 13:35:29.466] has:service/expose-test-deployment exposed
I0110 13:35:29.543] service "expose-test-deployment" deleted
I0110 13:35:29.634] Successful
I0110 13:35:29.634] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0110 13:35:29.634] See 'kubectl expose -h' for help and examples
I0110 13:35:29.634] has:invalid deployment: no selectors
I0110 13:35:29.716] Successful
I0110 13:35:29.717] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0110 13:35:29.717] See 'kubectl expose -h' for help and examples
I0110 13:35:29.717] has:invalid deployment: no selectors
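kubectl expose copies the selector from the object it exposes; when the object has no usable selector and none is supplied, it fails as above. Sketch (the selectorless deployment name is hypothetical):

    kubectl expose deployment nginx-deployment --port=80                          # works: selector taken from the Deployment
    kubectl expose deployment selectorless-deploy --port=80                       # error: couldn't retrieve selectors via --selector flag or introspection
    kubectl expose deployment selectorless-deploy --port=80 --selector=app=nginx  # or supply one explicitly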
I0110 13:35:29.869] deployment.apps/nginx-deployment created
I0110 13:35:29.967] core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0110 13:35:30.059] service/nginx-deployment exposed
I0110 13:35:30.153] core.sh:1137: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
... skipping 23 lines ...
I0110 13:35:31.667] service "frontend" deleted
I0110 13:35:31.673] service "frontend-2" deleted
I0110 13:35:31.680] service "frontend-3" deleted
I0110 13:35:31.686] service "frontend-4" deleted
I0110 13:35:31.692] service "frontend-5" deleted
I0110 13:35:31.783] Successful
I0110 13:35:31.783] message:error: cannot expose a Node
I0110 13:35:31.783] has:cannot expose
I0110 13:35:31.865] Successful
I0110 13:35:31.865] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0110 13:35:31.866] has:metadata.name: Invalid value
I0110 13:35:31.952] Successful
I0110 13:35:31.953] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0110 13:35:34.012] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 13:35:34.099] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0110 13:35:34.173] horizontalpodautoscaler.autoscaling "frontend" deleted
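The HPA asserted above (min 2, max 3, target CPU 80%) matches an autoscale invocation, and the "required flag(s) "max" not set" error in the warnings below is the same command with --max omitted. Sketch (not the exact harness commands):

    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80   # creates hpa/frontend with the values checked above
    kubectl autoscale rc frontend --min=2                            # Error: required flag(s) "max" not set
    kubectl delete hpa frontend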
W0110 13:35:34.274] I0110 13:35:33.593877   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"9ae9bf55-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f47f5
W0110 13:35:34.274] I0110 13:35:33.596067   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"9ae9bf55-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9wj4q
W0110 13:35:34.275] I0110 13:35:33.596177   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127324-10885", Name:"frontend", UID:"9ae9bf55-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"1635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-562k5
W0110 13:35:34.275] Error: required flag(s) "max" not set
W0110 13:35:34.275] 
W0110 13:35:34.275] 
W0110 13:35:34.275] Examples:
W0110 13:35:34.275]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0110 13:35:34.275]   kubectl autoscale deployment foo --min=2 --max=10
W0110 13:35:34.275]   
... skipping 54 lines ...
I0110 13:35:34.478]           limits:
I0110 13:35:34.478]             cpu: 300m
I0110 13:35:34.478]           requests:
I0110 13:35:34.478]             cpu: 300m
I0110 13:35:34.479]       terminationGracePeriodSeconds: 0
I0110 13:35:34.479] status: {}
W0110 13:35:34.579] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0110 13:35:34.710] deployment.apps/nginx-deployment-resources created
I0110 13:35:34.806] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0110 13:35:34.893] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:35:34.980] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0110 13:35:35.069] deployment.extensions/nginx-deployment-resources resource requirements updated
I0110 13:35:35.172] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0110 13:35:36.186] I0110 13:35:34.713120   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0110 13:35:36.187] I0110 13:35:34.716217   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-69c96fd869", UID:"9b950bf6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-krrnj
W0110 13:35:36.187] I0110 13:35:34.718103   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-69c96fd869", UID:"9b950bf6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-94lzr
W0110 13:35:36.188] I0110 13:35:34.718321   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-69c96fd869", UID:"9b950bf6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1657", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-9cgx8
W0110 13:35:36.188] I0110 13:35:35.072651   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0110 13:35:36.188] I0110 13:35:35.075520   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-6c5996c457", UID:"9bcbe635-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-774lf
W0110 13:35:36.188] error: unable to find container named redis
W0110 13:35:36.189] I0110 13:35:35.440318   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0110 13:35:36.189] I0110 13:35:35.444489   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-69c96fd869", UID:"9b950bf6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-krrnj
W0110 13:35:36.189] I0110 13:35:35.446589   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0110 13:35:36.189] I0110 13:35:35.451753   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-5f4579485f", UID:"9c03284e-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-9vtq5
W0110 13:35:36.190] I0110 13:35:35.703618   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 1
W0110 13:35:36.190] I0110 13:35:35.709246   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-69c96fd869", UID:"9b950bf6-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-94lzr
W0110 13:35:36.190] I0110 13:35:35.709836   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources", UID:"9b946f79-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1703", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-ff8d89cb6 to 1
W0110 13:35:36.191] I0110 13:35:35.713218   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127324-10885", Name:"nginx-deployment-resources-ff8d89cb6", UID:"9c2b4804-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1708", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-ff8d89cb6-vprb8
W0110 13:35:36.191] error: you must specify resources by --filename when --local is set.
W0110 13:35:36.191] Example resource specifications include:
W0110 13:35:36.191]    '-f rsrc.yaml'
W0110 13:35:36.191]    '--filename=rsrc.json'
I0110 13:35:36.291] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0110 13:35:36.328] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0110 13:35:36.417] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
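The cpu limit/request assertions in this block track kubectl set resources edits, including the failure when a non-existent container is targeted ("unable to find container named redis" in the warnings above). Sketch (container names assumed from the image list, not confirmed by the log):

    kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m
    kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
    kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=100m   # error: unable to find container named redis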
... skipping 44 lines ...
I0110 13:35:37.844]                 pod-template-hash=55c9b846cc
I0110 13:35:37.844] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0110 13:35:37.844]                 deployment.kubernetes.io/max-replicas: 2
I0110 13:35:37.844]                 deployment.kubernetes.io/revision: 1
I0110 13:35:37.845] Controlled By:  Deployment/test-nginx-apps
I0110 13:35:37.845] Replicas:       1 current / 1 desired
I0110 13:35:37.845] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:37.845] Pod Template:
I0110 13:35:37.845]   Labels:  app=test-nginx-apps
I0110 13:35:37.845]            pod-template-hash=55c9b846cc
I0110 13:35:37.845]   Containers:
I0110 13:35:37.845]    nginx:
I0110 13:35:37.845]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0110 13:35:41.711]     Image:	k8s.gcr.io/nginx:test-cmd
I0110 13:35:41.806] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 13:35:41.918] deployment.extensions/nginx rolled back
I0110 13:35:43.030] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:35:43.238] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:35:43.351] deployment.extensions/nginx rolled back
W0110 13:35:43.452] error: unable to find specified revision 1000000 in history
I0110 13:35:44.462] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 13:35:44.568] deployment.extensions/nginx paused
W0110 13:35:44.683] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0110 13:35:44.789] deployment.extensions/nginx resumed
I0110 13:35:44.913] deployment.extensions/nginx rolled back
I0110 13:35:45.118]     deployment.kubernetes.io/revision-history: 1,3
W0110 13:35:45.314] error: desired revision (3) is different from the running revision (5)
I0110 13:35:45.476] deployment.apps/nginx2 created
I0110 13:35:45.569] deployment.extensions "nginx2" deleted
I0110 13:35:45.665] deployment.extensions "nginx" deleted
W0110 13:35:45.765] I0110 13:35:45.480899   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127336-14936", Name:"nginx2", UID:"a1ff466a-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-6b58f7cc65 to 3
W0110 13:35:45.766] I0110 13:35:45.484020   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127336-14936", Name:"nginx2-6b58f7cc65", UID:"a1fff9b9-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-qgfd5
W0110 13:35:45.766] I0110 13:35:45.490238   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127336-14936", Name:"nginx2-6b58f7cc65", UID:"a1fff9b9-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-6b58f7cc65-8j4tv
... skipping 18 lines ...
I0110 13:35:47.069] deployment.apps/nginx-deployment image updated
I0110 13:35:47.169] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 13:35:47.264] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0110 13:35:47.436] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0110 13:35:47.528] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0110 13:35:47.621] deployment.extensions/nginx-deployment image updated
W0110 13:35:47.722] error: unable to find container named "redis"
W0110 13:35:47.723] I0110 13:35:47.631818   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127336-14936", Name:"nginx-deployment", UID:"a2453b46-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1969", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0110 13:35:47.723] I0110 13:35:47.636501   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127336-14936", Name:"nginx-deployment-646d4f779d", UID:"a245de3a-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-zpnpm
W0110 13:35:47.723] I0110 13:35:47.638285   56685 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547127336-14936", Name:"nginx-deployment", UID:"a2453b46-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1972", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0110 13:35:47.724] I0110 13:35:47.640269   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127336-14936", Name:"nginx-deployment-dc756cc6", UID:"a3473672-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-ng8l7
I0110 13:35:47.824] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0110 13:35:47.828] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 77 lines ...
I0110 13:35:52.293] Namespace:    namespace-1547127350-10426
I0110 13:35:52.294] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.294] Labels:       app=guestbook
I0110 13:35:52.294]               tier=frontend
I0110 13:35:52.294] Annotations:  <none>
I0110 13:35:52.294] Replicas:     3 current / 3 desired
I0110 13:35:52.294] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.294] Pod Template:
I0110 13:35:52.294]   Labels:  app=guestbook
I0110 13:35:52.295]            tier=frontend
I0110 13:35:52.295]   Containers:
I0110 13:35:52.295]    php-redis:
I0110 13:35:52.295]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 13:35:52.399] Namespace:    namespace-1547127350-10426
I0110 13:35:52.399] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.399] Labels:       app=guestbook
I0110 13:35:52.399]               tier=frontend
I0110 13:35:52.399] Annotations:  <none>
I0110 13:35:52.400] Replicas:     3 current / 3 desired
I0110 13:35:52.400] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.400] Pod Template:
I0110 13:35:52.400]   Labels:  app=guestbook
I0110 13:35:52.400]            tier=frontend
I0110 13:35:52.400]   Containers:
I0110 13:35:52.400]    php-redis:
I0110 13:35:52.400]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0110 13:35:52.499] Namespace:    namespace-1547127350-10426
I0110 13:35:52.499] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.499] Labels:       app=guestbook
I0110 13:35:52.499]               tier=frontend
I0110 13:35:52.500] Annotations:  <none>
I0110 13:35:52.500] Replicas:     3 current / 3 desired
I0110 13:35:52.500] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.500] Pod Template:
I0110 13:35:52.500]   Labels:  app=guestbook
I0110 13:35:52.500]            tier=frontend
I0110 13:35:52.500]   Containers:
I0110 13:35:52.500]    php-redis:
I0110 13:35:52.501]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0110 13:35:52.606] Namespace:    namespace-1547127350-10426
I0110 13:35:52.606] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.607] Labels:       app=guestbook
I0110 13:35:52.607]               tier=frontend
I0110 13:35:52.607] Annotations:  <none>
I0110 13:35:52.607] Replicas:     3 current / 3 desired
I0110 13:35:52.607] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.607] Pod Template:
I0110 13:35:52.607]   Labels:  app=guestbook
I0110 13:35:52.607]            tier=frontend
I0110 13:35:52.608]   Containers:
I0110 13:35:52.608]    php-redis:
I0110 13:35:52.608]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I0110 13:35:52.609]   Type    Reason            Age   From                   Message
I0110 13:35:52.609]   ----    ------            ----  ----                   -------
I0110 13:35:52.609]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-qq2bb
I0110 13:35:52.610]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-vxm2w
I0110 13:35:52.610]   Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-7pcx2
I0110 13:35:52.610]
W0110 13:35:52.711] E0110 13:35:50.304980   56685 replica_set.go:450] Sync "namespace-1547127336-14936/nginx-deployment-669d4f8fc9" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-669d4f8fc9": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1547127336-14936/nginx-deployment-669d4f8fc9, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a49f14c0-14dc-11e9-9eb1-0242ac110002, UID in object meta: 
W0110 13:35:52.711] E0110 13:35:50.404521   56685 replica_set.go:450] Sync "namespace-1547127336-14936/nginx-deployment-5766b7c95b" failed with replicasets.apps "nginx-deployment-5766b7c95b" not found
W0110 13:35:52.712] E0110 13:35:50.454663   56685 replica_set.go:450] Sync "namespace-1547127336-14936/nginx-deployment-7b8f7659b7" failed with replicasets.apps "nginx-deployment-7b8f7659b7" not found
W0110 13:35:52.712] E0110 13:35:50.504479   56685 replica_set.go:450] Sync "namespace-1547127336-14936/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W0110 13:35:52.712] I0110 13:35:50.907796   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a53b10c2-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2123", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tkch7
W0110 13:35:52.713] I0110 13:35:50.910486   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a53b10c2-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2123", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jfl8k
W0110 13:35:52.713] I0110 13:35:50.911154   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a53b10c2-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2123", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8bqmc
W0110 13:35:52.713] I0110 13:35:51.313213   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend-no-cascade", UID:"a5799431-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-wvvpp
W0110 13:35:52.714] I0110 13:35:51.315405   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend-no-cascade", UID:"a5799431-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-gf62g
W0110 13:35:52.714] I0110 13:35:51.316300   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend-no-cascade", UID:"a5799431-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-wrcmh
W0110 13:35:52.714] E0110 13:35:51.554349   56685 replica_set.go:450] Sync "namespace-1547127350-10426/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0110 13:35:52.715] I0110 13:35:52.073156   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a5ed84f1-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qq2bb
W0110 13:35:52.715] I0110 13:35:52.075391   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a5ed84f1-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vxm2w
W0110 13:35:52.715] I0110 13:35:52.075723   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a5ed84f1-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7pcx2
I0110 13:35:52.816] Successful describe rs:
I0110 13:35:52.816] Name:         frontend
I0110 13:35:52.816] Namespace:    namespace-1547127350-10426
I0110 13:35:52.816] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.816] Labels:       app=guestbook
I0110 13:35:52.817]               tier=frontend
I0110 13:35:52.817] Annotations:  <none>
I0110 13:35:52.817] Replicas:     3 current / 3 desired
I0110 13:35:52.817] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.817] Pod Template:
I0110 13:35:52.817]   Labels:  app=guestbook
I0110 13:35:52.817]            tier=frontend
I0110 13:35:52.817]   Containers:
I0110 13:35:52.817]    php-redis:
I0110 13:35:52.818]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 13:35:52.844] Namespace:    namespace-1547127350-10426
I0110 13:35:52.844] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.844] Labels:       app=guestbook
I0110 13:35:52.844]               tier=frontend
I0110 13:35:52.844] Annotations:  <none>
I0110 13:35:52.845] Replicas:     3 current / 3 desired
I0110 13:35:52.845] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.845] Pod Template:
I0110 13:35:52.845]   Labels:  app=guestbook
I0110 13:35:52.845]            tier=frontend
I0110 13:35:52.845]   Containers:
I0110 13:35:52.845]    php-redis:
I0110 13:35:52.846]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0110 13:35:52.947] Namespace:    namespace-1547127350-10426
I0110 13:35:52.947] Selector:     app=guestbook,tier=frontend
I0110 13:35:52.947] Labels:       app=guestbook
I0110 13:35:52.947]               tier=frontend
I0110 13:35:52.947] Annotations:  <none>
I0110 13:35:52.948] Replicas:     3 current / 3 desired
I0110 13:35:52.948] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:52.948] Pod Template:
I0110 13:35:52.948]   Labels:  app=guestbook
I0110 13:35:52.948]            tier=frontend
I0110 13:35:52.948]   Containers:
I0110 13:35:52.948]    php-redis:
I0110 13:35:52.948]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0110 13:35:53.053] Namespace:    namespace-1547127350-10426
I0110 13:35:53.053] Selector:     app=guestbook,tier=frontend
I0110 13:35:53.053] Labels:       app=guestbook
I0110 13:35:53.053]               tier=frontend
I0110 13:35:53.053] Annotations:  <none>
I0110 13:35:53.053] Replicas:     3 current / 3 desired
I0110 13:35:53.053] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0110 13:35:53.054] Pod Template:
I0110 13:35:53.054]   Labels:  app=guestbook
I0110 13:35:53.054]            tier=frontend
I0110 13:35:53.054]   Containers:
I0110 13:35:53.054]    php-redis:
I0110 13:35:53.054]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0110 13:35:58.029] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0110 13:35:58.108] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0110 13:35:58.177] horizontalpodautoscaler.autoscaling "frontend" deleted
W0110 13:35:58.278] I0110 13:35:57.640291   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a93f1e01-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pkv8g
W0110 13:35:58.278] I0110 13:35:57.642536   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a93f1e01-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nd4mw
W0110 13:35:58.279] I0110 13:35:57.642771   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547127350-10426", Name:"frontend", UID:"a93f1e01-14dc-11e9-9eb1-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n6nw2
W0110 13:35:58.279] Error: required flag(s) "max" not set
W0110 13:35:58.279] 
W0110 13:35:58.279] 
W0110 13:35:58.279] Examples:
W0110 13:35:58.279]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0110 13:35:58.279]   kubectl autoscale deployment foo --min=2 --max=10
W0110 13:35:58.279]   
... skipping 85 lines ...
I0110 13:36:00.836] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0110 13:36:00.915] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0110 13:36:01.010] statefulset.apps/nginx rolled back
I0110 13:36:01.097] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0110 13:36:01.180] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 13:36:01.271] Successful
I0110 13:36:01.272] message:error: unable to find specified revision 1000000 in history
I0110 13:36:01.272] has:unable to find specified revision
I0110 13:36:01.350] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0110 13:36:01.430] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0110 13:36:01.520] statefulset.apps/nginx rolled back
I0110 13:36:01.602] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0110 13:36:01.679] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I0110 13:36:03.271] Name:         mock
I0110 13:36:03.271] Namespace:    namespace-1547127362-16347
I0110 13:36:03.271] Selector:     app=mock
I0110 13:36:03.271] Labels:       app=mock
I0110 13:36:03.271] Annotations:  <none>
I0110 13:36:03.271] Replicas:     1 current / 1 desired
I0110 13:36:03.271] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:36:03.271] Pod Template:
I0110 13:36:03.272]   Labels:  app=mock
I0110 13:36:03.272]   Containers:
I0110 13:36:03.272]    mock-container:
I0110 13:36:03.272]     Image:        k8s.gcr.io/pause:2.0
I0110 13:36:03.272]     Port:         9949/TCP
... skipping 56 lines ...
I0110 13:36:05.201] Name:         mock
I0110 13:36:05.201] Namespace:    namespace-1547127362-16347
I0110 13:36:05.201] Selector:     app=mock
I0110 13:36:05.201] Labels:       app=mock
I0110 13:36:05.201] Annotations:  <none>
I0110 13:36:05.201] Replicas:     1 current / 1 desired
I0110 13:36:05.202] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:36:05.202] Pod Template:
I0110 13:36:05.202]   Labels:  app=mock
I0110 13:36:05.202]   Containers:
I0110 13:36:05.202]    mock-container:
I0110 13:36:05.202]     Image:        k8s.gcr.io/pause:2.0
I0110 13:36:05.202]     Port:         9949/TCP
... skipping 56 lines ...
I0110 13:36:07.237] Name:         mock
I0110 13:36:07.237] Namespace:    namespace-1547127362-16347
I0110 13:36:07.237] Selector:     app=mock
I0110 13:36:07.237] Labels:       app=mock
I0110 13:36:07.237] Annotations:  <none>
I0110 13:36:07.237] Replicas:     1 current / 1 desired
I0110 13:36:07.237] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:36:07.237] Pod Template:
I0110 13:36:07.237]   Labels:  app=mock
I0110 13:36:07.238]   Containers:
I0110 13:36:07.238]    mock-container:
I0110 13:36:07.238]     Image:        k8s.gcr.io/pause:2.0
I0110 13:36:07.238]     Port:         9949/TCP
... skipping 42 lines ...
I0110 13:36:09.211] Namespace:    namespace-1547127362-16347
I0110 13:36:09.211] Selector:     app=mock
I0110 13:36:09.211] Labels:       app=mock
I0110 13:36:09.211]               status=replaced
I0110 13:36:09.211] Annotations:  <none>
I0110 13:36:09.211] Replicas:     1 current / 1 desired
I0110 13:36:09.212] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:36:09.212] Pod Template:
I0110 13:36:09.212]   Labels:  app=mock
I0110 13:36:09.212]   Containers:
I0110 13:36:09.212]    mock-container:
I0110 13:36:09.212]     Image:        k8s.gcr.io/pause:2.0
I0110 13:36:09.212]     Port:         9949/TCP
... skipping 11 lines ...
I0110 13:36:09.214] Namespace:    namespace-1547127362-16347
I0110 13:36:09.214] Selector:     app=mock2
I0110 13:36:09.214] Labels:       app=mock2
I0110 13:36:09.214]               status=replaced
I0110 13:36:09.214] Annotations:  <none>
I0110 13:36:09.215] Replicas:     1 current / 1 desired
I0110 13:36:09.215] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0110 13:36:09.215] Pod Template:
I0110 13:36:09.215]   Labels:  app=mock2
I0110 13:36:09.215]   Containers:
I0110 13:36:09.215]    mock-container:
I0110 13:36:09.215]     Image:        k8s.gcr.io/pause:2.0
I0110 13:36:09.215]     Port:         9949/TCP
... skipping 105 lines ...
I0110 13:36:13.754] Context "test" modified.
I0110 13:36:13.760] +++ [0110 13:36:13] Testing persistent volumes
I0110 13:36:13.843] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:36:13.995] persistentvolume/pv0001 created
W0110 13:36:14.095] I0110 13:36:12.793704   56685 horizontal.go:313] Horizontal Pod Autoscaler frontend has been deleted in namespace-1547127350-10426
W0110 13:36:14.096] I0110 13:36:12.937878   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127362-16347", Name:"mock", UID:"b25d66e9-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"2619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-zc2sv
W0110 13:36:14.096] E0110 13:36:14.000059   56685 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0110 13:36:14.197] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0110 13:36:14.197] persistentvolume "pv0001" deleted
I0110 13:36:14.326] persistentvolume/pv0002 created
I0110 13:36:14.416] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0110 13:36:14.490] persistentvolume "pv0002" deleted
W0110 13:36:14.590] E0110 13:36:14.328157   56685 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
W0110 13:36:14.643] E0110 13:36:14.642661   56685 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0110 13:36:14.743] persistentvolume/pv0003 created
I0110 13:36:14.744] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0110 13:36:14.812] persistentvolume "pv0003" deleted
I0110 13:36:14.906] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0110 13:36:14.919] +++ exit code: 0
I0110 13:36:14.953] Recording: run_persistent_volume_claims_tests
... skipping 467 lines ...
I0110 13:36:19.261] yes
I0110 13:36:19.261] has:the server doesn't have a resource type
I0110 13:36:19.335] Successful
I0110 13:36:19.336] message:yes
I0110 13:36:19.336] has:yes
I0110 13:36:19.407] Successful
I0110 13:36:19.408] message:error: --subresource can not be used with NonResourceURL
I0110 13:36:19.408] has:subresource can not be used with NonResourceURL
I0110 13:36:19.484] Successful
I0110 13:36:19.562] Successful
I0110 13:36:19.563] message:yes
I0110 13:36:19.563] 0
I0110 13:36:19.563] has:0
... skipping 6 lines ...
I0110 13:36:19.744] role.rbac.authorization.k8s.io/testing-R reconciled
I0110 13:36:19.837] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0110 13:36:19.928] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0110 13:36:20.018] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0110 13:36:20.107] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0110 13:36:20.187] Successful
I0110 13:36:20.187] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0110 13:36:20.187] has:only rbac.authorization.k8s.io/v1 is supported
I0110 13:36:20.273] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0110 13:36:20.278] role.rbac.authorization.k8s.io "testing-R" deleted
I0110 13:36:20.286] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0110 13:36:20.293] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0110 13:36:20.303] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0110 13:36:21.371] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0110 13:36:21.373] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:36:21.375] +++ command: run_kubectl_explain_tests
I0110 13:36:21.384] +++ [0110 13:36:21] Testing kubectl(v1:explain)
W0110 13:36:21.485] I0110 13:36:21.262315   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127380-16663", Name:"cassandra", UID:"b718bd14-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"2699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-d4tk8
W0110 13:36:21.486] I0110 13:36:21.269879   56685 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547127380-16663", Name:"cassandra", UID:"b718bd14-14dc-11e9-9eb1-0242ac110002", APIVersion:"v1", ResourceVersion:"2709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-6rt2b
W0110 13:36:21.486] E0110 13:36:21.275371   56685 replica_set.go:450] Sync "namespace-1547127380-16663/cassandra" failed with replicationcontrollers "cassandra" not found
I0110 13:36:21.586] KIND:     Pod
I0110 13:36:21.587] VERSION:  v1
I0110 13:36:21.587] 
I0110 13:36:21.587] DESCRIPTION:
I0110 13:36:21.587]      Pod is a collection of containers that can run on a host. This resource is
I0110 13:36:21.587]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0110 13:36:47.129] message:node/127.0.0.1 already uncordoned (dry run)
I0110 13:36:47.129] has:already uncordoned
I0110 13:36:47.213] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0110 13:36:47.289] node/127.0.0.1 labeled
I0110 13:36:47.377] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0110 13:36:47.442] Successful
I0110 13:36:47.442] message:error: cannot specify both a node name and a --selector option
I0110 13:36:47.443] See 'kubectl drain -h' for help and examples
I0110 13:36:47.443] has:cannot specify both a node name
I0110 13:36:47.508] Successful
I0110 13:36:47.508] message:error: USAGE: cordon NODE [flags]
I0110 13:36:47.508] See 'kubectl cordon -h' for help and examples
I0110 13:36:47.508] has:error\: USAGE\: cordon NODE
I0110 13:36:47.584] node/127.0.0.1 already uncordoned
I0110 13:36:47.655] Successful
I0110 13:36:47.655] message:error: You must provide one or more resources by argument or filename.
I0110 13:36:47.655] Example resource specifications include:
I0110 13:36:47.655]    '-f rsrc.yaml'
I0110 13:36:47.655]    '--filename=rsrc.json'
I0110 13:36:47.655]    '<resource> <name>'
I0110 13:36:47.655]    '<resource>'
I0110 13:36:47.656] has:must provide one or more resources
... skipping 15 lines ...
I0110 13:36:48.079] Successful
I0110 13:36:48.079] message:The following kubectl-compatible plugins are available:
I0110 13:36:48.079] 
I0110 13:36:48.079] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0110 13:36:48.080]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0110 13:36:48.080] 
I0110 13:36:48.080] error: one plugin warning was found
I0110 13:36:48.080] has:kubectl-version overwrites existing command: "kubectl version"
I0110 13:36:48.152] Successful
I0110 13:36:48.152] message:The following kubectl-compatible plugins are available:
I0110 13:36:48.152] 
I0110 13:36:48.152] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 13:36:48.152] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0110 13:36:48.152]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 13:36:48.152] 
I0110 13:36:48.153] error: one plugin warning was found
I0110 13:36:48.153] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0110 13:36:48.224] Successful
I0110 13:36:48.224] message:The following kubectl-compatible plugins are available:
I0110 13:36:48.224] 
I0110 13:36:48.224] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0110 13:36:48.225] has:plugins are available
I0110 13:36:48.291] Successful
I0110 13:36:48.292] message:
I0110 13:36:48.292] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0110 13:36:48.292] error: unable to find any kubectl plugins in your PATH
I0110 13:36:48.292] has:unable to find any kubectl plugins in your PATH
I0110 13:36:48.359] Successful
I0110 13:36:48.359] message:I am plugin foo
I0110 13:36:48.359] has:plugin foo
I0110 13:36:48.428] Successful
I0110 13:36:48.429] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1610+3d9c6eb9e6e175", GitCommit:"3d9c6eb9e6e1759683d2c6cda726aad8a0e85604", GitTreeState:"clean", BuildDate:"2019-01-10T13:30:19Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0110 13:36:48.525] 
I0110 13:36:48.526] +++ Running case: test-cmd.run_impersonation_tests 
I0110 13:36:48.528] +++ working dir: /go/src/k8s.io/kubernetes
I0110 13:36:48.531] +++ command: run_impersonation_tests
I0110 13:36:48.539] +++ [0110 13:36:48] Testing impersonation
I0110 13:36:48.604] Successful
I0110 13:36:48.604] message:error: requesting groups or user-extra for  without impersonating a user
I0110 13:36:48.604] has:without impersonating a user
I0110 13:36:48.748] certificatesigningrequest.certificates.k8s.io/foo created
I0110 13:36:48.836] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0110 13:36:48.918] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0110 13:36:48.994] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0110 13:36:49.153] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 42 lines ...
W0110 13:36:49.641] I0110 13:36:49.632545   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.641] I0110 13:36:49.633889   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.641] I0110 13:36:49.633929   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.641] I0110 13:36:49.633936   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.641] I0110 13:36:49.632575   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.642] I0110 13:36:49.633947   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.642] W0110 13:36:49.634050   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.642] W0110 13:36:49.634113   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.642] W0110 13:36:49.634134   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.643] W0110 13:36:49.634140   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.643] W0110 13:36:49.634200   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.643] W0110 13:36:49.634246   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.643] I0110 13:36:49.634263   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.644] W0110 13:36:49.634268   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.644] I0110 13:36:49.634278   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.644] W0110 13:36:49.634264   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.644] W0110 13:36:49.634315   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.645] W0110 13:36:49.634351   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.645] W0110 13:36:49.634375   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.645] W0110 13:36:49.634393   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.646] W0110 13:36:49.634555   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.646] W0110 13:36:49.634659   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.646] I0110 13:36:49.634709   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.646] I0110 13:36:49.634722   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.647] W0110 13:36:49.634725   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.647] I0110 13:36:49.634746   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.647] I0110 13:36:49.634753   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.647] W0110 13:36:49.634761   53345 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0110 13:36:49.648] I0110 13:36:49.634776   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.648] I0110 13:36:49.634783   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.648] I0110 13:36:49.634804   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.648] I0110 13:36:49.634810   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.648] I0110 13:36:49.634830   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.649] I0110 13:36:49.634835   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 54 lines ...
W0110 13:36:49.660] I0110 13:36:49.635960   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.660] I0110 13:36:49.636094   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.660] I0110 13:36:49.636105   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.660] I0110 13:36:49.636110   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.660] I0110 13:36:49.636117   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.661] I0110 13:36:49.636128   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.661] E0110 13:36:49.636127   53345 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0110 13:36:49.661] I0110 13:36:49.636137   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.661] I0110 13:36:49.636163   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.661] I0110 13:36:49.636168   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.662] I0110 13:36:49.636169   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.662] I0110 13:36:49.636182   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0110 13:36:49.662] I0110 13:36:49.636215   53345 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 237 lines ...
I0110 13:49:31.034] ok  	k8s.io/kubernetes/test/integration/replicationcontroller	56.542s
I0110 13:49:31.034] [restful] 2019/01/10 13:40:57 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34221/swaggerapi
I0110 13:49:31.034] [restful] 2019/01/10 13:40:57 log.go:33: [restful/swagger] https://127.0.0.1:34221/swaggerui/ is mapped to folder /swagger-ui/
I0110 13:49:31.035] [restful] 2019/01/10 13:40:59 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:34221/swaggerapi
I0110 13:49:31.035] [restful] 2019/01/10 13:40:59 log.go:33: [restful/swagger] https://127.0.0.1:34221/swaggerui/ is mapped to folder /swagger-ui/
I0110 13:49:31.035] ok  	k8s.io/kubernetes/test/integration/scale	12.800s
I0110 13:49:31.035] FAIL	k8s.io/kubernetes/test/integration/scheduler	511.285s
I0110 13:49:31.035] ok  	k8s.io/kubernetes/test/integration/scheduler_perf	1.104s
I0110 13:49:31.035] ok  	k8s.io/kubernetes/test/integration/secrets	4.664s
I0110 13:49:31.035] ok  	k8s.io/kubernetes/test/integration/serviceaccount	41.793s
I0110 13:49:31.035] [restful] 2019/01/10 13:41:58 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:46257/swaggerapi
I0110 13:49:31.035] [restful] 2019/01/10 13:41:58 log.go:33: [restful/swagger] https://127.0.0.1:46257/swaggerui/ is mapped to folder /swagger-ui/
I0110 13:49:31.036] [restful] 2019/01/10 13:42:01 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:46257/swaggerapi
... skipping 7 lines ...
I0110 13:49:31.037] [restful] 2019/01/10 13:42:40 log.go:33: [restful/swagger] https://127.0.0.1:43743/swaggerui/ is mapped to folder /swagger-ui/
I0110 13:49:31.037] ok  	k8s.io/kubernetes/test/integration/tls	13.655s
I0110 13:49:31.037] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.254s
I0110 13:49:31.037] ok  	k8s.io/kubernetes/test/integration/volume	92.812s
I0110 13:49:31.037] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	143.311s
I0110 13:49:44.480] +++ [0110 13:49:44] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190110-133659.xml
I0110 13:49:44.483] Makefile:184: recipe for target 'test' failed
I0110 13:49:44.492] +++ [0110 13:49:44] Cleaning up etcd
W0110 13:49:44.593] make[1]: *** [test] Error 1
W0110 13:49:44.593] !!! [0110 13:49:44] Call tree:
W0110 13:49:44.593] !!! [0110 13:49:44]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0110 13:49:44.723] +++ [0110 13:49:44] Integration test cleanup complete
I0110 13:49:44.723] Makefile:203: recipe for target 'test-integration' failed
W0110 13:49:44.823] make: *** [test-integration] Error 1
W0110 13:49:47.034] Traceback (most recent call last):
W0110 13:49:47.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0110 13:49:47.034]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0110 13:49:47.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0110 13:49:47.034]     check(*cmd)
W0110 13:49:47.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0110 13:49:47.034]     subprocess.check_call(cmd)
W0110 13:49:47.035]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0110 13:49:47.035]     raise CalledProcessError(retcode, cmd)
W0110 13:49:47.035] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0110 13:49:47.041] Command failed
I0110 13:49:47.041] process 504 exited with code 1 after 25.2m
E0110 13:49:47.042] FAIL: ci-kubernetes-integration-master
I0110 13:49:47.042] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0110 13:49:47.613] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0110 13:49:47.660] process 126384 exited with code 0 after 0.0m
I0110 13:49:47.660] Call:  gcloud config get-value account
I0110 13:49:47.969] process 126396 exited with code 0 after 0.0m
I0110 13:49:47.969] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0110 13:49:47.969] Upload result and artifacts...
I0110 13:49:47.970] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/7983
I0110 13:49:47.970] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7983/artifacts
W0110 13:49:48.978] CommandException: One or more URLs matched no objects.
E0110 13:49:49.091] Command failed
I0110 13:49:49.091] process 126408 exited with code 1 after 0.0m
W0110 13:49:49.092] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7983/artifacts not exist yet
I0110 13:49:49.092] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/7983/artifacts
I0110 13:49:53.079] process 126550 exited with code 0 after 0.1m
W0110 13:49:53.079] metadata path /workspace/_artifacts/metadata.json does not exist
W0110 13:49:53.079] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...