PR: liggitt: Make pod eviction trigger graceful deletion to match deletion via API
Result: FAILURE
Tests: 1 failed / 606 succeeded
Started: 2019-01-11 21:52
Elapsed: 27m4s
Revision:
Builder: gke-prow-containerd-pool-99179761-1b0c
Refs: master:08bee2cc, 72730:7dfa4083
pod: 00dbe503-15eb-11e9-829f-0a580a6c0288
infra-commit: 21b56ef87
repo: k8s.io/kubernetes
repo-commit: 2396de75dbd2d9ff57965ecdea288d1b826502ad
repos: {u'k8s.io/kubernetes': u'master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,72730:7dfa408301791b8aadd6f529408c8a3f9e9b45c7'}

Test Failures


k8s.io/kubernetes/test/integration/auth TestNodeAuthorizer (2m14s)

go test -v k8s.io/kubernetes/test/integration/auth -run TestNodeAuthorizer$
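To reproduce locally, note that this integration test expects a running etcd on 127.0.0.1:2379 (visible in the log below). A minimal sketch, assuming a checked-out k8s.io/kubernetes working tree and the project's standard integration-test make target (exact flags may vary by release):

    # from the root of a k8s.io/kubernetes checkout
    hack/install-etcd.sh                          # installs etcd under third_party/etcd
    export PATH="$(pwd)/third_party/etcd:$PATH"   # make the test-local etcd visible
    make test-integration WHAT=./test/integration/auth GOFLAGS="-v" KUBE_TEST_ARGS="-run TestNodeAuthorizer$"

The bare `go test` command above works as well once etcd is on PATH and the repo's test environment variables are set.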
W0111 22:08:59.934128  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
I0111 22:08:59.934147  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true]}
I0111 22:08:59.934239  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true]}
I0111 22:08:59.934318  117445 feature_gate.go:226] feature gates: &{map[DynamicKubeletConfig:true NodeLease:true CSIPersistentVolume:true]}
I0111 22:08:59.934396  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true]}
I0111 22:08:59.934630  117445 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
I0111 22:08:59.934652  117445 plugins.go:84] Registered admission plugin "Initializers"
I0111 22:08:59.934658  117445 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
I0111 22:08:59.934668  117445 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
I0111 22:08:59.934677  117445 plugins.go:84] Registered admission plugin "AlwaysAdmit"
I0111 22:08:59.934683  117445 plugins.go:84] Registered admission plugin "AlwaysPullImages"
I0111 22:08:59.934697  117445 plugins.go:84] Registered admission plugin "LimitPodHardAntiAffinityTopology"
I0111 22:08:59.934705  117445 plugins.go:84] Registered admission plugin "DefaultTolerationSeconds"
I0111 22:08:59.934718  117445 plugins.go:84] Registered admission plugin "AlwaysDeny"
I0111 22:08:59.934727  117445 plugins.go:84] Registered admission plugin "EventRateLimit"
I0111 22:08:59.934740  117445 plugins.go:84] Registered admission plugin "DenyEscalatingExec"
I0111 22:08:59.934745  117445 plugins.go:84] Registered admission plugin "DenyExecOnPrivileged"
I0111 22:08:59.934752  117445 plugins.go:84] Registered admission plugin "ExtendedResourceToleration"
I0111 22:08:59.934758  117445 plugins.go:84] Registered admission plugin "OwnerReferencesPermissionEnforcement"
I0111 22:08:59.934794  117445 plugins.go:84] Registered admission plugin "ImagePolicyWebhook"
I0111 22:08:59.934821  117445 plugins.go:84] Registered admission plugin "LimitRanger"
I0111 22:08:59.934833  117445 plugins.go:84] Registered admission plugin "NamespaceAutoProvision"
I0111 22:08:59.934841  117445 plugins.go:84] Registered admission plugin "NamespaceExists"
I0111 22:08:59.934847  117445 plugins.go:84] Registered admission plugin "NodeRestriction"
I0111 22:08:59.934866  117445 plugins.go:84] Registered admission plugin "PersistentVolumeLabel"
I0111 22:08:59.934877  117445 plugins.go:84] Registered admission plugin "PodNodeSelector"
I0111 22:08:59.934882  117445 plugins.go:84] Registered admission plugin "PodPreset"
I0111 22:08:59.934895  117445 plugins.go:84] Registered admission plugin "PodTolerationRestriction"
I0111 22:08:59.934911  117445 plugins.go:84] Registered admission plugin "ResourceQuota"
I0111 22:08:59.934925  117445 plugins.go:84] Registered admission plugin "PodSecurityPolicy"
I0111 22:08:59.934932  117445 plugins.go:84] Registered admission plugin "Priority"
I0111 22:08:59.934945  117445 plugins.go:84] Registered admission plugin "SecurityContextDeny"
I0111 22:08:59.935277  117445 plugins.go:84] Registered admission plugin "ServiceAccount"
I0111 22:08:59.935305  117445 plugins.go:84] Registered admission plugin "DefaultStorageClass"
I0111 22:08:59.935313  117445 plugins.go:84] Registered admission plugin "PersistentVolumeClaimResize"
I0111 22:08:59.935320  117445 plugins.go:84] Registered admission plugin "StorageObjectInUseProtection"
I0111 22:08:59.936307  117445 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:08:59.936580  117445 serving.go:311] Generated self-signed cert (/tmp/kubernetes-kube-apiserver527985522/apiserver.crt, /tmp/kubernetes-kube-apiserver527985522/apiserver.key)
I0111 22:08:59.936601  117445 server.go:562] external host was not specified, using 127.0.0.1
I0111 22:08:59.936789  117445 server.go:605] Initializing cache sizes based on 0MB limit
W0111 22:09:00.226365  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226405  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226416  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226716  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226755  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226791  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226874  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226891  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226901  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226917  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.226935  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.227725  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.227773  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.227812  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.227880  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.228176  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.228334  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
W0111 22:09:00.228648  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:09:00.228671  117445 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 22:09:00.228679  117445 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 22:09:00.228697  117445 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:09:00.229734  117445 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 22:09:00.229749  117445 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0111 22:09:00.231617  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.231633  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.231675  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.231759  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.232122  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.234044  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.234286  117445 reflector.go:169] Listing and watching *apiextensions.CustomResourceDefinition from storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions
W0111 22:09:00.263386  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:09:00.264664  117445 services.go:45] Setting service IP to "10.0.0.1" (read-write).
I0111 22:09:00.264682  117445 master.go:229] Using reconciler: lease
I0111 22:09:00.264745  117445 storage_factory.go:285] storing apiServerIPInfo in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.265051  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.265067  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.265104  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.265147  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.265519  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.265655  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.268080  117445 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.268207  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.268225  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.268286  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.268350  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.272307  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.272418  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.272928  117445 reflector.go:169] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0111 22:09:00.272923  117445 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.273238  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.273261  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.273426  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.273558  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.273943  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.274012  117445 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.274107  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.274129  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.274158  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.274179  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.274238  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.275228  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.275297  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.275545  117445 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.275626  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.275637  117445 reflector.go:169] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0111 22:09:00.275645  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.275678  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.275788  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.275998  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.276369  117445 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.276569  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.276591  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.276618  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.276769  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.276784  117445 reflector.go:169] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0111 22:09:00.276833  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.277026  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.277230  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.277277  117445 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.277349  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.277352  117445 reflector.go:169] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0111 22:09:00.277361  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.277540  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.277582  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.278782  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.278817  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.279186  117445 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.279307  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.279330  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.279381  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.279409  117445 reflector.go:169] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0111 22:09:00.279445  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.282742  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.282834  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.283049  117445 reflector.go:169] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0111 22:09:00.283251  117445 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.283390  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.283407  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.283490  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.283572  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.283844  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.284002  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.284190  117445 reflector.go:169] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0111 22:09:00.284274  117445 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.284494  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.284511  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.284558  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.284595  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.285044  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.285322  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.285427  117445 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.285565  117445 reflector.go:169] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0111 22:09:00.285576  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.285726  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.285755  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.285795  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.286014  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.286046  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.286429  117445 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.286555  117445 reflector.go:169] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0111 22:09:00.287073  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.287098  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.287317  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.287405  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.288875  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.288914  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.289646  117445 reflector.go:169] Listing and watching *core.Node from storage/cacher.go:/minions
I0111 22:09:00.291328  117445 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.291524  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.291587  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.291668  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.291753  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.293869  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.293934  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.295975  117445 reflector.go:169] Listing and watching *core.Pod from storage/cacher.go:/pods
I0111 22:09:00.296646  117445 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.297079  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.297098  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.297253  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.298218  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.330411  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.333203  117445 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.333342  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.333359  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.333413  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.333517  117445 reflector.go:169] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0111 22:09:00.333768  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.334140  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.334343  117445 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.334435  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.334445  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.334542  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.334650  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.334678  117445 reflector.go:169] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0111 22:09:00.334820  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.335038  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.335105  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.335112  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.335128  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.335193  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.335207  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.335337  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.335470  117445 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.335566  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.335574  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.335590  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.335634  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.335649  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.335769  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.336606  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.336756  117445 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0111 22:09:00.352776  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.460572  117445 storage_factory.go:285] storing auditsinks.auditregistration.k8s.io in auditregistration.k8s.io/v1alpha1, reading as auditregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.460695  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.460721  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.460775  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.460831  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.461125  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.461512  117445 master.go:416] Enabling API group "auditregistration.k8s.io".
I0111 22:09:00.461569  117445 master.go:416] Enabling API group "authentication.k8s.io".
I0111 22:09:00.461601  117445 master.go:416] Enabling API group "authorization.k8s.io".
I0111 22:09:00.461760  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.461794  117445 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.461864  117445 reflector.go:169] Listing and watching *auditregistration.AuditSink from storage/cacher.go:/auditsinks
I0111 22:09:00.461911  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.461925  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.461956  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.462014  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.462315  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.462676  117445 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.462767  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.462781  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.462822  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.462937  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.462981  117445 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:09:00.463209  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.463428  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.463537  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.463739  117445 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:09:00.463778  117445 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.463867  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.463880  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.463910  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.464099  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.464350  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.464547  117445 master.go:416] Enabling API group "autoscaling".
I0111 22:09:00.464632  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.464731  117445 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.464826  117445 reflector.go:169] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0111 22:09:00.464841  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.464854  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.464882  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.465461  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.465920  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.466042  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.466283  117445 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.466403  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.466432  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.466464  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.466549  117445 reflector.go:169] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0111 22:09:00.466685  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.466914  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.467421  117445 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.467523  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.467579  117445 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 22:09:00.467606  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.467621  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.467698  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.467750  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.467966  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.468045  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.468144  117445 master.go:416] Enabling API group "batch".
I0111 22:09:00.468191  117445 reflector.go:169] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0111 22:09:00.468309  117445 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.469008  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.469053  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.469093  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.469171  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.469549  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.469589  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.469800  117445 master.go:416] Enabling API group "certificates.k8s.io".
I0111 22:09:00.469921  117445 reflector.go:169] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0111 22:09:00.469990  117445 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.470086  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.470100  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.470140  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.470175  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.470409  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.470495  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.470793  117445 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.470884  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.470897  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.470922  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.470950  117445 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:09:00.471023  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.471264  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.471424  117445 master.go:416] Enabling API group "coordination.k8s.io".
I0111 22:09:00.471591  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.471692  117445 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.471810  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.471831  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.471905  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.471954  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.471968  117445 reflector.go:169] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0111 22:09:00.472201  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.472401  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.472522  117445 reflector.go:169] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0111 22:09:00.472548  117445 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.472638  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.472656  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.472691  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.472865  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.473137  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.473189  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.473706  117445 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.473757  117445 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:09:00.473808  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.473828  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.473854  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.473926  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.474388  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.474596  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.474753  117445 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:09:00.474785  117445 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.474892  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.474911  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.475083  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.475131  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.475724  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.475754  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.476052  117445 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.476127  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.476149  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.476189  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.476453  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.476744  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.476805  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.477009  117445 reflector.go:169] Listing and watching *extensions.Ingress from storage/cacher.go:/ingress
I0111 22:09:00.477063  117445 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0111 22:09:00.477075  117445 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.477174  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.477186  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.477214  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.477418  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.477816  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.477941  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.478296  117445 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:09:00.478349  117445 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.478444  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.478466  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.478541  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.478595  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.478845  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.479018  117445 master.go:416] Enabling API group "extensions".
I0111 22:09:00.479170  117445 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.479248  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.479260  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.479287  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.479367  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.479416  117445 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:09:00.479558  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.479957  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.479985  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.480134  117445 master.go:416] Enabling API group "networking.k8s.io".
I0111 22:09:00.480248  117445 reflector.go:169] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0111 22:09:00.480323  117445 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.480596  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.480612  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.480640  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.480695  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.480934  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.480965  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.481187  117445 reflector.go:169] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0111 22:09:00.481237  117445 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.481333  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.481347  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.481373  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.481419  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.481746  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.481916  117445 master.go:416] Enabling API group "policy".
I0111 22:09:00.481948  117445 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.482022  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.482033  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.482059  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.482126  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.482172  117445 reflector.go:169] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0111 22:09:00.482381  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.483240  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.483425  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.483626  117445 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.483703  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.483715  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.483749  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.483794  117445 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:09:00.483950  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.484412  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.484509  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.484631  117445 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.484698  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.484709  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.484723  117445 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:09:00.484733  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.484801  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.484988  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.485256  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.485349  117445 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.485424  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.485436  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.485467  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.485559  117445 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:09:00.485713  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.485927  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.486004  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.486127  117445 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.486187  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.486198  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.486224  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.486238  117445 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:09:00.486353  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.486615  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.486761  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.486886  117445 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.486957  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.486975  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.487000  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.487037  117445 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:09:00.487178  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.487402  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.487590  117445 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.487650  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.487659  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.487682  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.487764  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.487811  117445 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:09:00.487911  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.488096  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.488247  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.488540  117445 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.488625  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.488637  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.488662  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.488702  117445 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:09:00.488980  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.489259  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.489325  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.489570  117445 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.489703  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.489722  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.489750  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.489874  117445 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:09:00.490469  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.490897  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.490963  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.491240  117445 reflector.go:169] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0111 22:09:00.491490  117445 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.491756  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.491821  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.491867  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.492616  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.493674  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.493879  117445 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.493945  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.493965  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.493997  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.494046  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.494070  117445 reflector.go:169] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0111 22:09:00.494131  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.494360  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.494658  117445 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.494745  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.494759  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.494786  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.494867  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.494911  117445 reflector.go:169] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0111 22:09:00.495031  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.495294  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.495352  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.495464  117445 master.go:416] Enabling API group "rbac.authorization.k8s.io".
I0111 22:09:00.495603  117445 reflector.go:169] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0111 22:09:00.498033  117445 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.498346  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.498365  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.498409  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.498504  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.499402  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.499513  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.500040  117445 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1beta1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.500161  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.500220  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.500303  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.500403  117445 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 22:09:00.500651  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.502007  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.502099  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.502357  117445 master.go:416] Enabling API group "scheduling.k8s.io".
I0111 22:09:00.502390  117445 reflector.go:169] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0111 22:09:00.502439  117445 storage_factory.go:285] storing podpresets.settings.k8s.io in settings.k8s.io/v1alpha1, reading as settings.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.502602  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.502673  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.502755  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.503000  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.506043  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.506100  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.506254  117445 master.go:416] Enabling API group "settings.k8s.io".
I0111 22:09:00.506298  117445 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.506393  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.506406  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.506406  117445 reflector.go:169] Listing and watching *settings.PodPreset from storage/cacher.go:/podpresets
I0111 22:09:00.506828  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.506940  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.507308  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.507544  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.508217  117445 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:09:00.508704  117445 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.509090  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.509112  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.509243  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.509327  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.509577  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.509696  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.509777  117445 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.509874  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.509887  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.509889  117445 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:09:00.509910  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.509978  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.510181  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.510524  117445 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.510563  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.510606  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.510618  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.510666  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.510671  117445 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:09:00.510707  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.510894  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.511054  117445 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.511088  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.511131  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.511138  117445 reflector.go:169] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0111 22:09:00.511142  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.511403  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.511452  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.511987  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.512173  117445 master.go:416] Enabling API group "storage.k8s.io".
I0111 22:09:00.512373  117445 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.512747  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.512764  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.512777  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.512823  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.512890  117445 reflector.go:169] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0111 22:09:00.513048  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.513235  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.513589  117445 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.513671  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.513684  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.513729  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.513791  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.513833  117445 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:09:00.514003  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.517843  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.518076  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.518392  117445 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:09:00.518443  117445 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.518948  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.518976  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.519017  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.519079  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.519966  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.520062  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.520460  117445 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.520602  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.520616  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.520647  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.520692  117445 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:09:00.520901  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.522001  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.522230  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.522364  117445 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.522386  117445 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:09:00.522465  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.522499  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.522545  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.522599  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.522942  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.523358  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.523544  117445 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:09:00.523585  117445 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.523687  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.523716  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.523758  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.523830  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.526092  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.526420  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.526603  117445 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:09:00.526589  117445 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.526852  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.526879  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.526919  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.526996  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.527314  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.527436  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.527949  117445 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.528000  117445 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:09:00.528078  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.528092  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.528122  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.528186  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.528430  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.528510  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.528805  117445 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:09:00.528828  117445 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.528931  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.528946  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.528981  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.529040  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.531342  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.531736  117445 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.531841  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.531874  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.531906  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.532009  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.532063  117445 reflector.go:169] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0111 22:09:00.532200  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.533622  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.533663  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.534221  117445 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.534246  117445 reflector.go:169] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0111 22:09:00.534322  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.534426  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.534460  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.534573  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.534803  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.535141  117445 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.535231  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.535254  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.535297  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.535351  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.535367  117445 reflector.go:169] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0111 22:09:00.535425  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.535724  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.536044  117445 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.536124  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.536137  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.536163  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.536211  117445 reflector.go:169] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0111 22:09:00.536219  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.536272  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.538125  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.538317  117445 master.go:416] Enabling API group "apps".
I0111 22:09:00.538338  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.538388  117445 storage_factory.go:285] storing initializerconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1alpha1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.538588  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.538663  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.538701  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.538749  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.538792  117445 reflector.go:169] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0111 22:09:00.539302  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.539791  117445 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.539832  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.539883  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.539903  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.539945  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.539996  117445 reflector.go:169] Listing and watching *admissionregistration.InitializerConfiguration from storage/cacher.go:/initializerconfigurations
I0111 22:09:00.539999  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.540596  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.540690  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.540775  117445 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.540870  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.540883  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.540895  117445 reflector.go:169] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0111 22:09:00.540915  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.540951  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.541201  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.541457  117445 master.go:416] Enabling API group "admissionregistration.k8s.io".
I0111 22:09:00.541541  117445 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"184f2d7a-8ea2-4223-9a16-c263ee569588/registry", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:""}, Quorum:false, Paging:true, Codec:runtime.Codec(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:0}
I0111 22:09:00.541635  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.541798  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:00.541820  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:00.541854  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:00.541880  117445 reflector.go:169] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0111 22:09:00.541903  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.542254  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:00.542405  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:00.542442  117445 master.go:416] Enabling API group "events.k8s.io".
I0111 22:09:01.225845  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:01.225884  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:01.225942  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:01.225997  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:01.226391  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:01.226541  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
[restful] 2019/01/11 22:09:01 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42917/swaggerapi
[restful] 2019/01/11 22:09:01 log.go:33: [restful/swagger] https://127.0.0.1:42917/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2019/01/11 22:09:03 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42917/swaggerapi
[restful] 2019/01/11 22:09:03 log.go:33: [restful/swagger] https://127.0.0.1:42917/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:09:03.920184  117445 plugins.go:158] Loaded 7 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0111 22:09:03.920223  117445 plugins.go:161] Loaded 5 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
W0111 22:09:03.922386  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:09:03.922580  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:03.922599  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:03.922664  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:03.922865  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:03.923657  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:03.923933  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:03.925127  117445 clientconn.go:551] parsed scheme: ""
I0111 22:09:03.926568  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:09:03.926755  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:09:03.925173  117445 reflector.go:169] Listing and watching *apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices
I0111 22:09:03.927698  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:03.928126  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:09:03.928501  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:09:03.929323  117445 reflector.go:169] Listing and watching *apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices
W0111 22:09:03.932923  117445 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0111 22:09:10.038186  117445 secure_serving.go:116] Serving securely on 127.0.0.1:42917
I0111 22:09:10.038273  117445 apiservice_controller.go:90] Starting APIServiceRegistrationController
I0111 22:09:10.038291  117445 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0111 22:09:10.038287  117445 controller.go:84] Starting OpenAPI AggregationController
I0111 22:09:10.038345  117445 autoregister_controller.go:136] Starting autoregister controller
I0111 22:09:10.038364  117445 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 22:09:10.038369  117445 available_controller.go:316] Starting AvailableConditionController
I0111 22:09:10.038376  117445 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0111 22:09:10.038403  117445 crd_finalizer.go:242] Starting CRDFinalizer
I0111 22:09:10.038454  117445 crdregistration_controller.go:112] Starting crd-autoregister controller
I0111 22:09:10.038466  117445 controller_utils.go:1021] Waiting for caches to sync for crd-autoregister controller
I0111 22:09:10.038707  117445 customresource_discovery_controller.go:203] Starting DiscoveryController
I0111 22:09:10.038732  117445 naming_controller.go:284] Starting NamingConditionController
I0111 22:09:10.039004  117445 reflector.go:131] Starting reflector *apiextensions.CustomResourceDefinition (5m0s) from k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117
I0111 22:09:10.039022  117445 reflector.go:169] Listing and watching *apiextensions.CustomResourceDefinition from k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117
I0111 22:09:10.039437  117445 reflector.go:131] Starting reflector *v1beta1.ValidatingWebhookConfiguration (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.039457  117445 reflector.go:169] Listing and watching *v1beta1.ValidatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.039685  117445 establishing_controller.go:73] Starting EstablishingController
I0111 22:09:10.040157  117445 reflector.go:131] Starting reflector *v1.ClusterRole (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.040178  117445 reflector.go:169] Listing and watching *v1.ClusterRole from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.040783  117445 reflector.go:131] Starting reflector *v1.Endpoints (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.040801  117445 reflector.go:169] Listing and watching *v1.Endpoints from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.041335  117445 reflector.go:131] Starting reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.041352  117445 reflector.go:169] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.041789  117445 reflector.go:131] Starting reflector *v1.Role (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.041801  117445 reflector.go:169] Listing and watching *v1.Role from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.049237  117445 reflector.go:131] Starting reflector *v1.Node (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.049279  117445 reflector.go:169] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.049579  117445 reflector.go:131] Starting reflector *v1.PersistentVolume (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.049598  117445 reflector.go:169] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.051187  117445 reflector.go:131] Starting reflector *v1.StorageClass (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.051206  117445 reflector.go:169] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.051669  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/roles?limit=500&resourceVersion=0: (712.807µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.051722  117445 reflector.go:131] Starting reflector *v1beta1.MutatingWebhookConfiguration (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.051735  117445 reflector.go:169] Listing and watching *v1beta1.MutatingWebhookConfiguration from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052296  117445 reflector.go:131] Starting reflector *v1.ResourceQuota (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052310  117445 reflector.go:169] Listing and watching *v1.ResourceQuota from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052360  117445 wrap.go:47] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (481.664µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.052772  117445 reflector.go:131] Starting reflector *v1.Pod (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052788  117445 reflector.go:169] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052829  117445 reflector.go:131] Starting reflector *v1.RoleBinding (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052841  117445 reflector.go:169] Listing and watching *v1.RoleBinding from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052958  117445 reflector.go:131] Starting reflector *v1.LimitRange (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052974  117445 reflector.go:169] Listing and watching *v1.LimitRange from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053367  117445 reflector.go:131] Starting reflector *v1beta1.VolumeAttachment (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053382  117445 reflector.go:169] Listing and watching *v1beta1.VolumeAttachment from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053432  117445 reflector.go:131] Starting reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053442  117445 reflector.go:169] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.052507  117445 reflector.go:131] Starting reflector *v1.Namespace (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053626  117445 reflector.go:169] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.053693  117445 wrap.go:47] GET /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?limit=500&resourceVersion=0: (484.72µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.054279  117445 wrap.go:47] GET /api/v1/resourcequotas?limit=500&resourceVersion=0: (476.839µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.054332  117445 wrap.go:47] GET /apis/storage.k8s.io/v1beta1/volumeattachments?limit=500&resourceVersion=0: (461.09µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.054751  117445 reflector.go:131] Starting reflector *v1beta1.PriorityClass (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.054765  117445 reflector.go:169] Listing and watching *v1beta1.PriorityClass from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.054952  117445 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=5741 labels= fields= timeout=6m57s
I0111 22:09:10.055101  117445 reflector.go:131] Starting reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.055111  117445 reflector.go:169] Listing and watching *v1.ClusterRoleBinding from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.055422  117445 reflector.go:131] Starting reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.055432  117445 reflector.go:169] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:132
I0111 22:09:10.055750  117445 wrap.go:47] GET /api/v1/pods?limit=500&resourceVersion=0: (608.321µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.056219  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/rolebindings?limit=500&resourceVersion=0: (378.746µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.056580  117445 get.go:251] Starting watch for /apis/rbac.authorization.k8s.io/v1/roles, rv=5740 labels= fields= timeout=8m39s
I0111 22:09:10.057271  117445 wrap.go:47] GET /api/v1/limitranges?limit=500&resourceVersion=0: (431.17µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.058832  117445 wrap.go:47] GET /api/v1/serviceaccounts?limit=500&resourceVersion=0: (540.76µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.059244  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles?limit=500&resourceVersion=0: (397.488µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.059353  117445 wrap.go:47] GET /api/v1/namespaces?limit=500&resourceVersion=0: (462.817µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.059689  117445 wrap.go:47] GET /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations?limit=500&resourceVersion=0: (412.773µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.059850  117445 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses?limit=500&resourceVersion=0: (456.21µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.060300  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0: (415.477µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.060365  117445 wrap.go:47] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (432.528µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.061692  117445 wrap.go:47] GET /api/v1/services?limit=500&resourceVersion=0: (1.350027ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.062055  117445 reflector.go:131] Starting reflector *apiregistration.APIService (30s) from k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117
I0111 22:09:10.062077  117445 reflector.go:169] Listing and watching *apiregistration.APIService from k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117
I0111 22:09:10.063000  117445 wrap.go:47] GET /apis/apiregistration.k8s.io/v1/apiservices?limit=500&resourceVersion=0: (555.122µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.063873  117445 get.go:251] Starting watch for /apis/scheduling.k8s.io/v1beta1/priorityclasses, rv=5741 labels= fields= timeout=6m24s
I0111 22:09:10.064213  117445 get.go:251] Starting watch for /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations, rv=5741 labels= fields= timeout=8m12s
I0111 22:09:10.064565  117445 get.go:251] Starting watch for /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations, rv=5741 labels= fields= timeout=6m9s
I0111 22:09:10.065323  117445 get.go:251] Starting watch for /api/v1/services, rv=5740 labels= fields= timeout=8m51s
I0111 22:09:10.065962  117445 get.go:251] Starting watch for /api/v1/limitranges, rv=5711 labels= fields= timeout=6m12s
I0111 22:09:10.066390  117445 get.go:251] Starting watch for /apis/rbac.authorization.k8s.io/v1/clusterrolebindings, rv=5740 labels= fields= timeout=5m20s
I0111 22:09:10.068902  117445 get.go:251] Starting watch for /api/v1/pods, rv=5715 labels= fields= timeout=6m32s
I0111 22:09:10.068943  117445 get.go:251] Starting watch for /apis/rbac.authorization.k8s.io/v1/clusterroles, rv=5740 labels= fields= timeout=8m26s
I0111 22:09:10.069272  117445 get.go:251] Starting watch for /apis/rbac.authorization.k8s.io/v1/rolebindings, rv=5741 labels= fields= timeout=7m20s
I0111 22:09:10.069282  117445 get.go:251] Starting watch for /api/v1/serviceaccounts, rv=5739 labels= fields= timeout=9m36s
I0111 22:09:10.069715  117445 get.go:251] Starting watch for /api/v1/namespaces, rv=5711 labels= fields= timeout=5m30s
I0111 22:09:10.070078  117445 get.go:251] Starting watch for /api/v1/resourcequotas, rv=5711 labels= fields= timeout=9m59s
I0111 22:09:10.070421  117445 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/volumeattachments, rv=5741 labels= fields= timeout=5m23s
I0111 22:09:10.070576  117445 get.go:251] Starting watch for /apis/apiregistration.k8s.io/v1/apiservices, rv=5839 labels= fields= timeout=8m48s
I0111 22:09:10.070610  117445 wrap.go:47] GET /api/v1/nodes?limit=500&resourceVersion=0: (3.330434ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.070788  117445 wrap.go:47] GET /api/v1/secrets?limit=500&resourceVersion=0: (458.404µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.070900  117445 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=5711 labels= fields= timeout=5m22s
I0111 22:09:10.071353  117445 get.go:251] Starting watch for /api/v1/nodes, rv=5712 labels= fields= timeout=6m49s
I0111 22:09:10.071401  117445 wrap.go:47] GET /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions?limit=500&resourceVersion=0: (3.152762ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.071434  117445 get.go:251] Starting watch for /api/v1/secrets, rv=5711 labels= fields= timeout=6m51s
I0111 22:09:10.071742  117445 wrap.go:47] GET /api/v1/endpoints?limit=500&resourceVersion=0: (2.031331ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.071957  117445 wrap.go:47] GET /api/v1/services: (3.544426ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.072120  117445 get.go:251] Starting watch for /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions, rv=5711 labels= fields= timeout=7m25s
I0111 22:09:10.072463  117445 get.go:251] Starting watch for /api/v1/endpoints, rv=5712 labels= fields= timeout=7m5s
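(The reflector and watch lines above are the client-go shared-informer machinery warming its caches: each informer's reflector issues a LIST, which shows up as the GET ...?limit=500&resourceVersion=0 requests, and then opens a WATCH at the returned resourceVersion. Below is a minimal sketch of that pattern, assuming a placeholder kubeconfig path and an illustrative Pod event handler; it is not code from this test.)

// Sketch of the client-go informer/reflector list-then-watch pattern seen above.
// The kubeconfig path and the handler body are illustrative assumptions.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// 10m0s matches the resync period shown for the reflectors in the log.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added:", obj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each started informer's reflector lists, then watches
	cache.WaitForCacheSync(stop, podInformer.HasSynced)
	<-stop
}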
I0111 22:09:10.078189  117445 wrap.go:47] GET /api/v1/services: (1.203506ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.083461  117445 wrap.go:47] GET /api/v1/namespaces/default: (3.312723ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.087191  117445 wrap.go:47] POST /api/v1/namespaces: (3.091569ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.090089  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.386315ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.096258  117445 wrap.go:47] GET /api/v1/namespaces/default/resourcequotas: (1.424874ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.098363  117445 wrap.go:47] POST /api/v1/namespaces/default/services: (7.460549ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.102350  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.07962ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:10.103568  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:10.104629  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (666.388µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:10.104935  117445 controller.go:155] Unable to perform initial Kubernetes service initialization: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:09:10.107296  117445 wrap.go:47] GET /api/v1/services: (1.217978ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.107755  117445 wrap.go:47] GET /api/v1/services: (1.497429ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.108494  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (3.189604ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.108810  117445 wrap.go:47] GET /api/v1/namespaces/default: (2.897132ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.112790  117445 wrap.go:47] POST /api/v1/namespaces: (3.085291ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.112904  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (3.152843ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.114815  117445 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.394508ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.117296  117445 wrap.go:47] POST /api/v1/namespaces: (1.921431ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.117606  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.433752ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:10.118739  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:10.118990  117445 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.126908ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.119574  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (622.956µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:10.120440  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
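(The two 422 responses and the errors above come from the bootstrap controller trying to publish the "kubernetes" service endpoints as [127.0.0.1], which endpoint validation rejects because the address is in the loopback range. A standard-library sketch of that check follows; the function name is illustrative, not the apiserver's actual validator.)

// Sketch of the loopback-address rejection reported by the 422 responses above.
package main

import (
	"fmt"
	"net"
)

func validateEndpointIP(ip string) error {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return fmt.Errorf("%q is not a valid IP address", ip)
	}
	if parsed.IsLoopback() { // 127.0.0.0/8 for IPv4, ::1 for IPv6
		return fmt.Errorf("%q may not be in the loopback range (127.0.0.0/8)", ip)
	}
	return nil
}

func main() {
	fmt.Println(validateEndpointIP("127.0.0.1")) // rejected, as in the log
	fmt.Println(validateEndpointIP("10.0.0.1"))  // accepted: <nil>
}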
I0111 22:09:10.122253  117445 wrap.go:47] POST /api/v1/namespaces: (2.640498ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.138519  117445 shared_informer.go:123] caches populated
I0111 22:09:10.138555  117445 cache.go:39] Caches are synced for AvailableConditionController controller
I0111 22:09:10.138540  117445 shared_informer.go:123] caches populated
I0111 22:09:10.138579  117445 shared_informer.go:123] caches populated
I0111 22:09:10.138593  117445 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 22:09:10.138629  117445 shared_informer.go:123] caches populated
I0111 22:09:10.138631  117445 shared_informer.go:123] caches populated
I0111 22:09:10.138636  117445 cache.go:39] Caches are synced for autoregister controller
I0111 22:09:10.138640  117445 controller_utils.go:1028] Caches are synced for crd-autoregister controller
I0111 22:09:10.139127  117445 shared_informer.go:123] caches populated
I0111 22:09:10.139149  117445 shared_informer.go:123] caches populated
I0111 22:09:10.139671  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.139697  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.139705  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.139753  117445 healthz.go:170] healthz check autoregister-completion failed: missing APIService: [v1. v1.apps v1.authentication.k8s.io v1.authorization.k8s.io v1.autoscaling v1.batch v1.coordination.k8s.io v1.networking.k8s.io v1.rbac.authorization.k8s.io v1.storage.k8s.io v1alpha1.admissionregistration.k8s.io v1alpha1.auditregistration.k8s.io v1alpha1.rbac.authorization.k8s.io v1alpha1.scheduling.k8s.io v1alpha1.settings.k8s.io v1alpha1.storage.k8s.io v1beta1.admissionregistration.k8s.io v1beta1.apiextensions.k8s.io v1beta1.apps v1beta1.authentication.k8s.io v1beta1.authorization.k8s.io v1beta1.batch v1beta1.certificates.k8s.io v1beta1.coordination.k8s.io v1beta1.events.k8s.io v1beta1.extensions v1beta1.policy v1beta1.rbac.authorization.k8s.io v1beta1.scheduling.k8s.io v1beta1.storage.k8s.io v1beta2.apps v2alpha1.batch v2beta1.autoscaling v2beta2.autoscaling]
I0111 22:09:10.139948  117445 wrap.go:47] GET /healthz: (2.187152ms) 500
goroutine 47749 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02f09e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02f09e2a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02f08c920, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc000b0fbc0, 0xc019e79c00, 0x344, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc01f958f80, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0000)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc000b0fbc0, 0xc02f0e0000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012834960, 0xc047d55b60, 0x69bd1e0, 0xc000b0fbc0, 0xc02f0e0000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[-]autoregister-completion failed: reason withheld\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
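(The 500 above is the /healthz handler aggregating named checks: each post-start hook registers a check that fails with "not finished" until the hook completes, and the report lists every check with a [+] or [-] marker. Below is a minimal sketch of that aggregation pattern under assumed check names and wiring; it is not the k8s.io/apiserver implementation.)

// Sketch of a /healthz-style handler that aggregates named checks and returns
// 500 with a [+]/[-] report while any check still fails, mirroring the format
// logged above. Check names and the listen address are illustrative.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	fn   func() error
}

func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		report, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				report += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				report += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			http.Error(w, report+"healthz check failed", http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, "ok")
	}
}

func main() {
	checks := []check{
		{"ping", func() error { return nil }},
		{"poststarthook/rbac/bootstrap-roles", func() error { return fmt.Errorf("not finished") }},
	}
	http.HandleFunc("/healthz", healthzHandler(checks))
	http.ListenAndServe("127.0.0.1:8080", nil)
}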
I0111 22:09:10.140197  117445 shared_informer.go:123] caches populated
I0111 22:09:10.144424  117445 cacher.go:598] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.
I0111 22:09:10.144589  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.288915ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.144747  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.907015ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.144855  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.33726ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.145062  117445 apiservice_controller.go:141] Adding v1alpha1.admissionregistration.k8s.io
I0111 22:09:10.145205  117445 apiservice_controller.go:141] Adding v1.apps
I0111 22:09:10.145273  117445 apiservice_controller.go:141] Adding v1beta1.admissionregistration.k8s.io
I0111 22:09:10.145095  117445 available_controller.go:367] Adding v1alpha1.admissionregistration.k8s.io
I0111 22:09:10.145444  117445 available_controller.go:367] Adding v1.apps
I0111 22:09:10.145608  117445 available_controller.go:367] Adding v1beta1.admissionregistration.k8s.io
I0111 22:09:10.147319  117445 cacher.go:598] cacher (*apiregistration.APIService): 1 objects queued in incoming channel.
I0111 22:09:10.148078  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (1.952443ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.148424  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (7.444181ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.148563  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.130245ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.149326  117445 apiservice_controller.go:141] Adding v1beta1.apiextensions.k8s.io
I0111 22:09:10.149541  117445 apiservice_controller.go:141] Adding v1beta2.apps
I0111 22:09:10.150174  117445 available_controller.go:367] Adding v1beta1.apiextensions.k8s.io
I0111 22:09:10.150195  117445 available_controller.go:367] Adding v1beta2.apps
I0111 22:09:10.150292  117445 apiservice_controller.go:141] Adding v1.
I0111 22:09:10.150557  117445 available_controller.go:367] Adding v1.
I0111 22:09:10.151020  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.394149ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.151103  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.497848ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.151343  117445 available_controller.go:367] Adding v1beta1.apps
I0111 22:09:10.151363  117445 available_controller.go:367] Adding v1alpha1.auditregistration.k8s.io
I0111 22:09:10.151411  117445 apiservice_controller.go:141] Adding v1beta1.apps
I0111 22:09:10.151419  117445 apiservice_controller.go:141] Adding v1alpha1.auditregistration.k8s.io
I0111 22:09:10.152561  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.254865ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.152752  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.940038ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.152806  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.390629ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.153607  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (1.968569ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.155152  117445 available_controller.go:367] Adding v1.authorization.k8s.io
I0111 22:09:10.155229  117445 apiservice_controller.go:141] Adding v1.authorization.k8s.io
I0111 22:09:10.155244  117445 apiservice_controller.go:141] Adding v1beta1.authentication.k8s.io
I0111 22:09:10.155252  117445 apiservice_controller.go:141] Adding v1.authentication.k8s.io
I0111 22:09:10.155277  117445 available_controller.go:367] Adding v1beta1.authentication.k8s.io
I0111 22:09:10.155284  117445 available_controller.go:367] Adding v1.authentication.k8s.io
I0111 22:09:10.155381  117445 available_controller.go:367] Adding v1beta1.authorization.k8s.io
I0111 22:09:10.155325  117445 apiservice_controller.go:141] Adding v1beta1.authorization.k8s.io
I0111 22:09:10.158590  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.193601ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.158700  117445 cacher.go:598] cacher (*apiregistration.APIService): 2 objects queued in incoming channel.
I0111 22:09:10.158720  117445 cacher.go:598] cacher (*apiregistration.APIService): 3 objects queued in incoming channel.
I0111 22:09:10.158615  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.159447ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.159072  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.835801ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.159296  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.139535ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.159100  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.097256ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.160948  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (1.216908ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.161081  117445 cacher.go:598] cacher (*apiregistration.APIService): 2 objects queued in incoming channel.
I0111 22:09:10.161211  117445 apiservice_controller.go:141] Adding v1beta1.batch
I0111 22:09:10.161235  117445 apiservice_controller.go:141] Adding v1.autoscaling
I0111 22:09:10.161247  117445 available_controller.go:367] Adding v1beta1.batch
I0111 22:09:10.161267  117445 available_controller.go:367] Adding v1.autoscaling
I0111 22:09:10.162363  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.661783ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.162702  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.520479ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.162801  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.784009ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.162841  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.739491ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.163090  117445 apiservice_controller.go:141] Adding v2beta1.autoscaling
I0111 22:09:10.163104  117445 apiservice_controller.go:141] Adding v2beta2.autoscaling
I0111 22:09:10.163114  117445 available_controller.go:367] Adding v2beta1.autoscaling
I0111 22:09:10.163164  117445 available_controller.go:367] Adding v2beta2.autoscaling
I0111 22:09:10.163192  117445 available_controller.go:367] Adding v1.batch
I0111 22:09:10.163202  117445 available_controller.go:367] Adding v1beta1.certificates.k8s.io
I0111 22:09:10.163229  117445 apiservice_controller.go:141] Adding v1.batch
I0111 22:09:10.163236  117445 apiservice_controller.go:141] Adding v1beta1.certificates.k8s.io
I0111 22:09:10.163668  117445 apiservice_controller.go:141] Adding v2alpha1.batch
I0111 22:09:10.163682  117445 apiservice_controller.go:141] Adding v1.coordination.k8s.io
I0111 22:09:10.163692  117445 available_controller.go:367] Adding v2alpha1.batch
I0111 22:09:10.163731  117445 apiservice_controller.go:141] Adding v1beta1.events.k8s.io
I0111 22:09:10.163747  117445 available_controller.go:367] Adding v1.coordination.k8s.io
I0111 22:09:10.163755  117445 available_controller.go:367] Adding v1beta1.events.k8s.io
I0111 22:09:10.163788  117445 apiservice_controller.go:141] Adding v1beta1.coordination.k8s.io
I0111 22:09:10.163877  117445 available_controller.go:367] Adding v1beta1.coordination.k8s.io
I0111 22:09:10.165000  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (1.671025ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.165553  117445 apiservice_controller.go:141] Adding v1.networking.k8s.io
I0111 22:09:10.165575  117445 available_controller.go:367] Adding v1.networking.k8s.io
I0111 22:09:10.167216  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.504498ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.167347  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.832988ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.167622  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.230293ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.167992  117445 apiservice_controller.go:141] Adding v1.rbac.authorization.k8s.io
I0111 22:09:10.168005  117445 apiservice_controller.go:141] Adding v1beta1.extensions
I0111 22:09:10.168013  117445 available_controller.go:367] Adding v1.rbac.authorization.k8s.io
I0111 22:09:10.168022  117445 available_controller.go:367] Adding v1beta1.extensions
I0111 22:09:10.168104  117445 apiservice_controller.go:141] Adding v1beta1.policy
I0111 22:09:10.168200  117445 available_controller.go:367] Adding v1beta1.policy
I0111 22:09:10.170457  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.768586ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.170679  117445 available_controller.go:367] Adding v1beta1.scheduling.k8s.io
I0111 22:09:10.170696  117445 apiservice_controller.go:141] Adding v1beta1.scheduling.k8s.io
I0111 22:09:10.170701  117445 available_controller.go:367] Adding v1alpha1.rbac.authorization.k8s.io
I0111 22:09:10.170706  117445 apiservice_controller.go:141] Adding v1alpha1.rbac.authorization.k8s.io
I0111 22:09:10.171208  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.527636ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.171858  117445 apiservice_controller.go:141] Adding v1beta1.rbac.authorization.k8s.io
I0111 22:09:10.171874  117445 apiservice_controller.go:141] Adding v1alpha1.scheduling.k8s.io
I0111 22:09:10.171886  117445 available_controller.go:367] Adding v1beta1.rbac.authorization.k8s.io
I0111 22:09:10.171926  117445 available_controller.go:367] Adding v1alpha1.scheduling.k8s.io
I0111 22:09:10.172199  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.9001ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.172444  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (4.231926ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.175115  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (3.081959ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.175433  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (2.363132ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.175687  117445 apiservice_controller.go:141] Adding v1.storage.k8s.io
I0111 22:09:10.175706  117445 apiservice_controller.go:141] Adding v1alpha1.settings.k8s.io
I0111 22:09:10.175717  117445 available_controller.go:367] Adding v1.storage.k8s.io
I0111 22:09:10.175861  117445 apiservice_controller.go:141] Adding v1alpha1.storage.k8s.io
I0111 22:09:10.176197  117445 available_controller.go:367] Adding v1alpha1.settings.k8s.io
I0111 22:09:10.176238  117445 available_controller.go:367] Adding v1alpha1.storage.k8s.io
I0111 22:09:10.178823  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (5.762507ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.179206  117445 apiservice_controller.go:141] Adding v1beta1.storage.k8s.io
I0111 22:09:10.179297  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (8.273316ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.179750  117445 available_controller.go:367] Adding v1beta1.storage.k8s.io
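(The POSTs to /apis/apiregistration.k8s.io/v1/apiservices and the "Adding <version>.<group>" lines above are the autoregistration controller creating a local APIService object for each built-in group/version. A sketch of such an object follows; the priority values are illustrative, not copied from the apiserver.)

// Sketch of the kind of object each POST above creates: a local APIService
// (Spec.Service left nil) registering a built-in group/version with the
// kube-aggregator.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apiService := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1.apps"}, // <version>.<group>, as in "Adding v1.apps"
		Spec: apiregistrationv1.APIServiceSpec{
			Group:                "apps",
			Version:              "v1",
			Service:              nil, // nil means the group/version is served locally by kube-apiserver
			GroupPriorityMinimum: 17800, // illustrative
			VersionPriority:      15,    // illustrative
		},
	}
	fmt.Printf("%+v\n", apiService)
}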
I0111 22:09:10.238821  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.238852  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.238861  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.239227  117445 wrap.go:47] GET /healthz: (1.57075ms) 500
goroutine 48015 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02be515e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02be515e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc046ec9e00, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc001594338, 0xc02cb79180, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc001594338, 0xc02ef5c500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc001594338, 0xc02ef5c400)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc001594338, 0xc02ef5c400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011ee76e0, 0xc047d55b60, 0x69bd1e0, 0xc001594338, 0xc02ef5c400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.338547  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.338580  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.338589  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.338785  117445 wrap.go:47] GET /healthz: (1.172143ms) 500
goroutine 48068 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc032d04850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc032d04850, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0384fb6e0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0033a6290, 0xc02d879180, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0033a6290, 0xc035d92300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0033a6290, 0xc035d92200)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0033a6290, 0xc035d92200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0136b1b00, 0xc047d55b60, 0x69bd1e0, 0xc0033a6290, 0xc035d92200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.438544  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.438576  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.438585  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.438812  117445 wrap.go:47] GET /healthz: (1.212874ms) 500
goroutine 47979 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03268f8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03268f8f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02eeeec00, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc024ff3418, 0xc02e15a700, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc024ff3418, 0xc02beefa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc024ff3418, 0xc02beef900)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc024ff3418, 0xc02beef900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01162f440, 0xc047d55b60, 0x69bd1e0, 0xc024ff3418, 0xc02beef900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.538624  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.538660  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.538668  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.538894  117445 wrap.go:47] GET /healthz: (1.226395ms) 500
goroutine 48088 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02de7b9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02de7b9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03683bc40, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fcd78, 0xc02f460380, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fcd78, 0xc0367e9800)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fcd78, 0xc0367e9800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01011c420, 0xc047d55b60, 0x69bd1e0, 0xc0470fcd78, 0xc0367e9800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.638543  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.638572  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.638580  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.638762  117445 wrap.go:47] GET /healthz: (1.14209ms) 500
goroutine 47981 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03268f9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03268f9d0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02eeeeec0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc024ff3458, 0xc02f990380, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc024ff3458, 0xc02db12300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc024ff3458, 0xc02db12200)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc024ff3458, 0xc02db12200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01162f800, 0xc047d55b60, 0x69bd1e0, 0xc024ff3458, 0xc02db12200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.738639  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.738669  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.738678  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.738872  117445 wrap.go:47] GET /healthz: (1.199177ms) 500
goroutine 48090 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02de7bab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02de7bab0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc034ae2000, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fcdb8, 0xc033f30e00, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0100)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fcdb8, 0xc034ae0100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01011c840, 0xc047d55b60, 0x69bd1e0, 0xc0470fcdb8, 0xc034ae0100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.839163  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.839191  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.839200  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.839564  117445 wrap.go:47] GET /healthz: (1.565758ms) 500
goroutine 47984 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03268fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03268fb20, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02eeef320, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc024ff3478, 0xc035611180, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc024ff3478, 0xc02db12900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc024ff3478, 0xc02db12800)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc024ff3478, 0xc02db12800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01162fc80, 0xc047d55b60, 0x69bd1e0, 0xc024ff3478, 0xc02db12800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:10.938560  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:10.938595  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:10.938605  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:10.938777  117445 wrap.go:47] GET /healthz: (1.151362ms) 500
goroutine 48103 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02be51810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02be51810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03242e400, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0015943a8, 0xc02d4cb880, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0015943a8, 0xc02ef5cb00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0015943a8, 0xc02ef5cb00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc011ee7c80, 0xc047d55b60, 0x69bd1e0, 0xc0015943a8, 0xc02ef5cb00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.038542  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.038574  117445 healthz.go:170] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0111 22:09:11.038583  117445 healthz.go:170] healthz check poststarthook/ca-registration failed: not finished
I0111 22:09:11.038753  117445 wrap.go:47] GET /healthz: (1.148091ms) 500
goroutine 48105 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02be518f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02be518f0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03242e820, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0015943e8, 0xc04bd52380, 0x32f, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0015943e8, 0xc02ef5d400)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0015943e8, 0xc02ef5d400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ffd60c0, 0xc047d55b60, 0x69bd1e0, 0xc0015943e8, 0xc02ef5d400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld\n[-]poststarthook/ca-registration failed: reason withheld\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
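The repeated GET /healthz 500s above are the test harness polling the freshly started apiserver while its post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish; the response body lists each check as [+] ok or [-] failed until every hook reports ready, at which point /healthz returns 200. A minimal sketch of such a readiness poll, assuming a placeholder healthzURL and poll interval rather than the test's actual helper:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls healthzURL until it returns 200 OK or the timeout elapses.
// Illustrative only; the URL and interval are assumptions, though the log above
// shows the real test probing roughly every 100ms.
func waitForHealthz(healthzURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(healthzURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every post-start hook check reported ok
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s not ready within %v", healthzURL, timeout)
}

func main() {
	// Hypothetical address; the integration test serves on a random local port.
	if err := waitForHealthz("http://127.0.0.1:8080/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}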
I0111 22:09:11.039947  117445 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.136649ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.040523  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.496898ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.040657  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.086534ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.041933  117445 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.368744ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.042055  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.024991ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.042114  117445 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I0111 22:09:11.042844  117445 wrap.go:47] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.831384ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.043081  117445 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (711.976µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.043313  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (721.388µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.044722  117445 wrap.go:47] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.148315ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.044988  117445 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I0111 22:09:11.044999  117445 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I0111 22:09:11.045041  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (807.783µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.046647  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.133667ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.046666  117445 wrap.go:47] GET /api/v1/namespaces/kube-system/resourcequotas: (2.211272ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.048049  117445 wrap.go:47] POST /api/v1/namespaces/kube-system/configmaps: (4.719271ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.048117  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (849.667µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.049551  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (878.958µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.050961  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (907.219µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.052253  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (822.254µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.054211  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.441293ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.054426  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0111 22:09:11.055354  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (703.443µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.057175  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.338323ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.057538  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0111 22:09:11.058632  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (824.138µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.060644  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.51149ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.060885  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0111 22:09:11.061849  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (778.679µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.063951  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.623747ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.064354  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I0111 22:09:11.065591  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.007811ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.067517  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.454026ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.067735  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I0111 22:09:11.068804  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (879.151µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.070994  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.722501ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.071386  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I0111 22:09:11.072636  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (856.631µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.074630  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538698ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.074853  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0111 22:09:11.076031  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (969.562µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.078503  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.700105ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.078881  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0111 22:09:11.079917  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (847.301µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.083344  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.476683ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.083843  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0111 22:09:11.085102  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (957.644µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.087112  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.525796ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.087428  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0111 22:09:11.088602  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (884.072µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.091366  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.224793ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.091842  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node
I0111 22:09:11.092924  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (828.807µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.095039  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.600069ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.095258  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0111 22:09:11.096769  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.341339ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.099149  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.81123ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.099430  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0111 22:09:11.100823  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.114309ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.103697  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.365045ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.103990  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0111 22:09:11.105186  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (944.128µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.107303  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.626053ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.107626  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0111 22:09:11.108770  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (858.013µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.110782  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.57178ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.111069  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0111 22:09:11.113071  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (858.695µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.115292  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.63919ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.115733  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0111 22:09:11.117146  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.127763ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.119358  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.643169ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.119661  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:09:11.120894  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (994.097µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.123027  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631947ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.123300  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:09:11.124541  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (899.704µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.126681  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.552931ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.127010  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0111 22:09:11.128053  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (828.295µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.130097  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.538046ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.130309  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0111 22:09:11.131520  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (948.328µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.133621  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567098ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.134309  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0111 22:09:11.135427  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aws-cloud-provider: (799.589µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.137225  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33882ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.137612  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:09:11.137964  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.138147  117445 wrap.go:47] GET /healthz: (814.261µs) 500
goroutine 48596 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03502e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03502e540, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0350b2b00, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc047dc6b78, 0xc037183500, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9dc00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc047dc6b78, 0xc034f9db00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc047dc6b78, 0xc034f9db00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e3d85a0, 0xc047d55b60, 0x69bd1e0, 0xc047dc6b78, 0xc034f9db00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.138994  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.04509ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.141127  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.577956ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.141356  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0111 22:09:11.142603  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (933.119µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.144600  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476576ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.144866  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0111 22:09:11.146105  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.013454ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.148621  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.042521ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.148884  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:09:11.150205  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (1.020714ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.153002  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.24782ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.153328  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0111 22:09:11.154703  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.058518ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.156734  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.567731ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.156989  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:09:11.158133  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (903.464µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.160284  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.598905ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.160521  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:09:11.161636  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (847.644µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.163546  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.446758ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.163765  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:09:11.164921  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (905.092µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.166991  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.587225ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.167351  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:09:11.168599  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (920.35µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.171032  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.794787ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.171288  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:09:11.172455  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (974.108µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.174741  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.829634ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.175037  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:09:11.176249  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (959.739µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.178441  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.677355ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.178748  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:09:11.180001  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (920.165µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.183132  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.630126ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.183556  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:09:11.184884  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.006351ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.187009  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.628331ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.187308  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:09:11.188425  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (865.919µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.190578  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.689976ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.190920  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:09:11.192078  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (916.014µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.195418  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.739343ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.195732  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:09:11.197019  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.006831ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.199505  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.901399ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.199870  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:09:11.201288  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.168983ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.203576  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.659284ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.203872  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:09:11.204988  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (910.943µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.207174  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.611559ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.207436  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:09:11.208511  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (826.442µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.210516  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.559474ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.211640  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:09:11.213224  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.253099ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.215298  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.475973ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.215640  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:09:11.216850  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (955.067µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.219214  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842878ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.219554  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:09:11.220674  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (812.537µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.222886  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.760165ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.223126  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:09:11.224336  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (954.33µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.226548  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.598573ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.226794  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:09:11.228461  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.453135ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.247326  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (18.368934ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.248105  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:09:11.253323  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.253544  117445 wrap.go:47] GET /healthz: (6.209217ms) 500
goroutine 48971 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0351f6230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0351f6230, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03516f000, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0011ab8c8, 0xc033935180, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb500)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0011ab8c8, 0xc0462fb500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc004af3bc0, 0xc047d55b60, 0x69bd1e0, 0xc0011ab8c8, 0xc0462fb500)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.254044  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (5.719447ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.257067  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.379748ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.257314  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:09:11.258879  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (945.817µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.266255  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.914979ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.266709  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:09:11.268667  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.482807ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.272261  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.998456ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.272606  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:09:11.274052  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.145057ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.276788  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.947179ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.277155  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:09:11.278464  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.012821ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.281824  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.12934ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.282090  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:09:11.301661  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.792533ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.321922  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.989579ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.322320  117445 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
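Each clusterrole entry above follows the same reconcile pattern: the RBAC bootstrap hook GETs the role (404 when absent) and then POSTs it (201 Created), which is why every role produces a 404/201 pair in the log. A minimal get-or-create sketch against those same REST paths, assuming an unauthenticated baseURL and a pre-serialized roleJSON manifest; this is illustrative, not the storage_rbac.go implementation:

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// ensureClusterRole checks for the named clusterrole and creates it if missing,
// mirroring the GET-then-POST pairs in the log. baseURL and roleJSON are assumptions.
func ensureClusterRole(baseURL, name string, roleJSON []byte) error {
	getResp, err := http.Get(baseURL + "/apis/rbac.authorization.k8s.io/v1/clusterroles/" + name)
	if err != nil {
		return err
	}
	getResp.Body.Close()
	switch getResp.StatusCode {
	case http.StatusOK:
		return nil // already present
	case http.StatusNotFound:
		// fall through and create it
	default:
		return fmt.Errorf("unexpected status %d checking clusterrole %s", getResp.StatusCode, name)
	}
	postResp, err := http.Post(baseURL+"/apis/rbac.authorization.k8s.io/v1/clusterroles",
		"application/json", bytes.NewReader(roleJSON))
	if err != nil {
		return err
	}
	postResp.Body.Close()
	if postResp.StatusCode != http.StatusCreated {
		return fmt.Errorf("create clusterrole %s: status %d", name, postResp.StatusCode)
	}
	return nil
}

func main() {
	// Hypothetical usage; a real caller would supply a reachable apiserver and a valid manifest.
	_ = ensureClusterRole("http://127.0.0.1:8080", "example-role",
		[]byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"name":"example-role"}}`))
}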
I0111 22:09:11.339052  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.339275  117445 wrap.go:47] GET /healthz: (1.495839ms) 500
goroutine 49107 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc046237dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc046237dc0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0351ff720, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc00000f160, 0xc046279180, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc00000f160, 0xc03521ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc00000f160, 0xc03521ad00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc00000f160, 0xc03521ad00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00afe1980, 0xc047d55b60, 0x69bd1e0, 0xc00000f160, 0xc03521ad00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.341524  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.560127ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.363615  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.511997ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.364031  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0111 22:09:11.381196  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.323343ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.405649  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.756617ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.406017  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0111 22:09:11.421026  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.234542ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.438835  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.439235  117445 wrap.go:47] GET /healthz: (1.516875ms) 500
goroutine 49126 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0351f7f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0351f7f80, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02ed75ce0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f841b8, 0xc02ee06a80, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f841b8, 0xc0352e2100)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f841b8, 0xc0352e2100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00afa9ce0, 0xc047d55b60, 0x69bd1e0, 0xc003f841b8, 0xc0352e2100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.441843  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.013691ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.442086  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0111 22:09:11.461180  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.386314ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.485714  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.940569ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.486036  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0111 22:09:11.501494  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.56525ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.522407  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.496672ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.522778  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0111 22:09:11.540389  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.540588  117445 wrap.go:47] GET /healthz: (1.309858ms) 500
goroutine 48987 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03400e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03400e8c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc028244d80, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc047e42df8, 0xc02bfa0a80, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc047e42df8, 0xc02c097500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc047e42df8, 0xc02c097400)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc047e42df8, 0xc02c097400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ad47aa0, 0xc047d55b60, 0x69bd1e0, 0xc047e42df8, 0xc02c097400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
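The repeated 500s above come from the apiserver's /healthz handler reporting each registered check: everything passes except the poststarthook/rbac/bootstrap-roles hook, which stays failed until the RBAC bootstrap roles and bindings have all been reconciled. A minimal sketch of reproducing that per-check report against a locally running test apiserver follows; the address and insecure port are assumptions, not values taken from this job.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumed local, insecure apiserver address; the integration test in this
	// job serves on a loopback port chosen at runtime.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// While bootstrap is still running this prints 500 and a report like the
	// one logged above, with "[-]poststarthook/rbac/bootstrap-roles failed".
	fmt.Println(resp.StatusCode)
	fmt.Print(string(body))
}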
I0111 22:09:11.545774  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (5.528028ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.561934  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118357ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.562274  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0111 22:09:11.581895  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.055737ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.601601  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.722058ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.602010  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0111 22:09:11.621248  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:aws-cloud-provider: (1.426958ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.638502  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.638699  117445 wrap.go:47] GET /healthz: (1.095549ms) 500
goroutine 49180 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0290e8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0290e8770, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0353aea60, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fd4d0, 0xc033ace380, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b500)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fd4d0, 0xc03539b400)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fd4d0, 0xc03539b400)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b5b2e40, 0xc047d55b60, 0x69bd1e0, 0xc0470fd4d0, 0xc03539b400)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.641507  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.789099ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.641784  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider
I0111 22:09:11.668784  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (8.987022ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.681815  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.009186ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.682075  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0111 22:09:11.700961  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.101719ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.721854  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.985187ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.722102  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:node
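Each binding above is reconciled with the same get-or-create pattern visible in the log: a GET for the named clusterrolebinding returns 404, then a POST to the collection returns 201. A rough equivalent over the raw REST paths is sketched below; the server address, binding name, and role are placeholders that do not appear in this job.

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	server := "http://127.0.0.1:8080" // assumed local, insecure apiserver
	name := "example-binding"         // placeholder name

	// GET the named object first; 404 means it does not exist yet.
	resp, err := http.Get(server + "/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/" + name)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	if resp.StatusCode == http.StatusNotFound {
		// POST to the collection to create it; a 201 matches the log above.
		body := []byte(`{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRoleBinding",` +
			`"metadata":{"name":"example-binding"},` +
			`"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"view"},` +
			`"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"system:authenticated"}]}`)
		create, err := http.Post(server+"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings",
			"application/json", bytes.NewReader(body))
		if err != nil {
			panic(err)
		}
		create.Body.Close()
		fmt.Println(create.StatusCode) // expect 201 Created
	}
}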
I0111 22:09:11.738478  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.738725  117445 wrap.go:47] GET /healthz: (1.081276ms) 500
goroutine 49132 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0352f4460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0352f4460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03536aae0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f84718, 0xc0284e8700, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f84718, 0xc0352e3e00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f84718, 0xc0352e3d00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f84718, 0xc0352e3d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b506e40, 0xc047d55b60, 0x69bd1e0, 0xc003f84718, 0xc0352e3d00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.740947  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.184376ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.762256  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.298404ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.762582  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0111 22:09:11.781157  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.225029ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.802334  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.308423ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.802791  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0111 22:09:11.830177  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (10.399574ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.838630  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.838842  117445 wrap.go:47] GET /healthz: (1.146472ms) 500
goroutine 49348 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0352f4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0352f4a10, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03536baa0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f84a28, 0xc03919d880, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f84a28, 0xc033c9c200)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f84a28, 0xc033c9c200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b507380, 0xc047d55b60, 0x69bd1e0, 0xc003f84a28, 0xc033c9c200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.841836  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.120584ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.842132  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0111 22:09:11.861142  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.261291ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.882385  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.383699ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.882688  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0111 22:09:11.901074  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.213983ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.921807  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.935099ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.922127  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0111 22:09:11.939729  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:11.939939  117445 wrap.go:47] GET /healthz: (1.141822ms) 500
goroutine 49368 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0352f5730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0352f5730, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc039c29720, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f84e80, 0xc0284e9880, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f84e80, 0xc039cfe000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f84e80, 0xc033c9df00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f84e80, 0xc033c9df00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c13c5a0, 0xc047d55b60, 0x69bd1e0, 0xc003f84e80, 0xc033c9df00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.940805  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.088544ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.961822  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.979081ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:11.962065  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0111 22:09:11.981319  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.399855ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.001927  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98114ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.002283  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0111 22:09:12.021227  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.371677ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.038634  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.038843  117445 wrap.go:47] GET /healthz: (1.216481ms) 500
goroutine 49397 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0352f5ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0352f5ea0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc039d4a9c0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f84f60, 0xc02d060a80, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f84f60, 0xc039cffa00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f84f60, 0xc039cff900)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f84f60, 0xc039cff900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00c5654a0, 0xc047d55b60, 0x69bd1e0, 0xc003f84f60, 0xc039cff900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.041986  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.227207ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.042225  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0111 22:09:12.061153  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.229871ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.082275  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.062885ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.082677  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0111 22:09:12.101379  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.470398ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.122245  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119842ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.122577  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0111 22:09:12.138682  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.138860  117445 wrap.go:47] GET /healthz: (1.159856ms) 500
goroutine 49423 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02f62c380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02f62c380, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02f74e140, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fd790, 0xc033acfc00, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fd790, 0xc039b73a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fd790, 0xc039b73900)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fd790, 0xc039b73900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00ce47620, 0xc047d55b60, 0x69bd1e0, 0xc0470fd790, 0xc039b73900)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.140887  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.105907ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.162059  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.150315ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.162387  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0111 22:09:12.181272  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.374588ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.201920  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.066716ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.202221  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0111 22:09:12.225721  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.496866ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.238456  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.238674  117445 wrap.go:47] GET /healthz: (1.042789ms) 500
goroutine 49481 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02f6f2d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02f6f2d90, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc02f7199e0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f850f8, 0xc02d061c00, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1d00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f850f8, 0xc02f6f1c00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f850f8, 0xc02f6f1c00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012bdc0c0, 0xc047d55b60, 0x69bd1e0, 0xc003f850f8, 0xc02f6f1c00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.241646  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930467ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.241931  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0111 22:09:12.265158  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.286413ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.282716  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.840992ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.283025  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0111 22:09:12.300981  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.19543ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.321915  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.921793ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.322225  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0111 22:09:12.338747  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.338966  117445 wrap.go:47] GET /healthz: (1.300074ms) 500
goroutine 49532 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc02f62d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc02f62d420, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc045fd6aa0, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fd9c8, 0xc039bdd880, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fd9c8, 0xc046058100)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fd9c8, 0xc046058100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc012e4e420, 0xc047d55b60, 0x69bd1e0, 0xc0470fd9c8, 0xc046058100)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.340913  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.175904ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.361795  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.867972ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.362086  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0111 22:09:12.381050  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.167351ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.402341  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.382539ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.402673  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0111 22:09:12.421262  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.316793ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.438742  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.438940  117445 wrap.go:47] GET /healthz: (1.327524ms) 500
goroutine 49562 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0460a4460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0460a4460, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc046092b20, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f85568, 0xc043f62380, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f85568, 0xc043f60700)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f85568, 0xc043f60700)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f85568, 0xc043f60700)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f85568, 0xc043f60700)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f85568, 0xc043f60700)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f85568, 0xc043f60600)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f85568, 0xc043f60600)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b539680, 0xc047d55b60, 0x69bd1e0, 0xc003f85568, 0xc043f60600)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.441880  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.179376ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.442155  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0111 22:09:12.461182  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.335859ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.483514  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.537325ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.487157  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0111 22:09:12.504720  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.455453ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.521749  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.900316ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.522137  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0111 22:09:12.538537  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.538745  117445 wrap.go:47] GET /healthz: (1.138597ms) 500
goroutine 49645 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0460a5490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0460a5490, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043fb3280, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003f85778, 0xc0391afc00, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003f85778, 0xc043fee900)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003f85778, 0xc043fee900)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003f85778, 0xc043fee900)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003f85778, 0xc043fee900)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003f85778, 0xc043fee900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003f85778, 0xc043fee800)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003f85778, 0xc043fee800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00945b2c0, 0xc047d55b60, 0x69bd1e0, 0xc003f85778, 0xc043fee800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.540963  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.24692ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.561812  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.93708ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.562154  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0111 22:09:12.581178  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.32126ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.602016  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173541ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.602334  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0111 22:09:12.621426  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.406505ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.638678  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.638922  117445 wrap.go:47] GET /healthz: (1.29431ms) 500
goroutine 49714 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc045e72af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc045e72af0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc045e31920, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc001595e10, 0xc044106380, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc001595e10, 0xc045e5af00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc001595e10, 0xc045e5ae00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc001595e10, 0xc045e5ae00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b0400c0, 0xc047d55b60, 0x69bd1e0, 0xc001595e10, 0xc045e5ae00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.641871  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.1295ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.642192  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0111 22:09:12.661392  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.42352ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.683129  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.251832ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.683543  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0111 22:09:12.701694  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.769686ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.721965  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.039687ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.722390  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0111 22:09:12.738633  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.738910  117445 wrap.go:47] GET /healthz: (1.269618ms) 500
goroutine 49706 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0440c08c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0440c08c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc0440a1600, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fdbd8, 0xc043f63500, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8300)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8200)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fdbd8, 0xc0331e8200)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00b597f20, 0xc047d55b60, 0x69bd1e0, 0xc0470fdbd8, 0xc0331e8200)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.741061  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.273442ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.762263  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.220515ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.762583  117445 storage_rbac.go:215] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0111 22:09:12.781465  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.523189ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.783405  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.312447ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.800695  117445 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0111 22:09:12.802261  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.217769ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.802602  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0111 22:09:12.821333  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.469347ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.823192  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.235465ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.838724  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.838941  117445 wrap.go:47] GET /healthz: (1.249483ms) 500
goroutine 49608 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc033bfacb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc033bfacb0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc043fb0980, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc047dc7240, 0xc0333d8380, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc047dc7240, 0xc033b87b00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc047dc7240, 0xc033b87a00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc047dc7240, 0xc033b87a00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc00e5b75c0, 0xc047d55b60, 0x69bd1e0, 0xc047dc7240, 0xc033b87a00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.841592  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.896221ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.841851  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:09:12.861624  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.598449ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.863780  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.515211ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.881968  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.059947ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.882268  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:09:12.901347  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.391558ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.903679  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.524209ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.922114  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.239932ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.922457  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:09:12.938715  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:12.939022  117445 wrap.go:47] GET /healthz: (1.366117ms) 500
goroutine 49859 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0333f50a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0333f50a0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03691f120, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0033a7c38, 0xc03320ae00, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0033a7c38, 0xc03344be00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0033a7c38, 0xc03344bd00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0033a7c38, 0xc03344bd00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc01204d0e0, 0xc047d55b60, 0x69bd1e0, 0xc0033a7c38, 0xc03344bd00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.941256  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.513524ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.943195  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.280127ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.962100  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.245135ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.962547  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:09:12.981128  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.305667ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:12.983470  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.607718ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.002265  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.348011ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.002715  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:09:13.035583  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (5.560927ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.038364  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:13.038569  117445 wrap.go:47] GET /healthz: (1.047761ms) 500
goroutine 49913 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc0440c1810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc0440c1810, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc03693b860, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0470fdd68, 0xc04605b180, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c100)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0470fdd68, 0xc03f56c000)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0470fdd68, 0xc03f56c000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0153c0a80, 0xc047d55b60, 0x69bd1e0, 0xc0470fdd68, 0xc03f56c000)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.042662  117445 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.325402ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.045419  117445 wrap.go:47] GET /api/v1/namespaces/kube-public/resourcequotas: (1.209038ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.048570  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (5.380688ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.048968  117445 storage_rbac.go:246] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:09:13.061351  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.412776ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.063608  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.555036ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.080540  117445 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0111 22:09:13.082701  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.829476ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.082954  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0111 22:09:13.101341  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.519599ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.103245  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.305419ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.121974  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.039448ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.122297  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0111 22:09:13.138642  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:13.138887  117445 wrap.go:47] GET /healthz: (1.172446ms) 500
goroutine 50083 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03712c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03712c7e0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc045e06780, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc0033a7e38, 0xc044107c00, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc0033a7e38, 0xc02db1a000)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc0033a7e38, 0xc0365cff00)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc0033a7e38, 0xc0365cff00)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc0161748a0, 0xc047d55b60, 0x69bd1e0, 0xc0033a7e38, 0xc0365cff00)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.141065  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.284866ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.142949  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.32508ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.162045  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.13243ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.162330  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0111 22:09:13.181206  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.317854ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.183244  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.27405ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.202658  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.705037ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.203119  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0111 22:09:13.221286  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.417504ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.223272  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.355748ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.238761  117445 healthz.go:170] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0111 22:09:13.238970  117445 wrap.go:47] GET /healthz: (1.240439ms) 500
goroutine 50119 [running]:
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc03712d6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc03712d6c0, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).WriteHeader(0xc028c13060, 0x1f4)
net/http.Error(0x7fc33300af58, 0xc003fca088, 0xc02db2c700, 0x305, 0x1f4)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/healthz.handleRootHealthz.func1(0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
net/http.HandlerFunc.ServeHTTP(0xc025a62640, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc0367bcdc0, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc047d7c3f0, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x494f066, 0xf, 0xc00bc47320, 0xc047d7c3f0, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
net/http.HandlerFunc.ServeHTTP(0xc047d56fc0, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1(0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
net/http.HandlerFunc.ServeHTTP(0xc047d8a780, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
net/http.HandlerFunc.ServeHTTP(0xc047d57000, 0x7fc33300af58, 0xc003fca088, 0xc028ca2900)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x7fc33300af58, 0xc003fca088, 0xc028ca2800)
net/http.HandlerFunc.ServeHTTP(0xc028e26eb0, 0x7fc33300af58, 0xc003fca088, 0xc028ca2800)
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1(0xc016753b60, 0xc047d55b60, 0x69bd1e0, 0xc003fca088, 0xc028ca2800)
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP

logging error output: "[+]ping ok\n[+]log ok\n[+]etcd ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/bootstrap-controller ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/ca-registration ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-openapi-controller ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\nhealthz check failed\n"
 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.241664  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.008092ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.242040  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0111 22:09:13.261211  117445 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.313125ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.263573  117445 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.558201ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.284758  117445 wrap.go:47] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.520088ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.285496  117445 storage_rbac.go:276] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0111 22:09:13.338859  117445 wrap.go:47] GET /healthz: (1.151409ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.341446  117445 wrap.go:47] GET /healthz: (901.85µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.343987  117445 wrap.go:47] POST /api/v1/namespaces: (1.970549ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.346555  117445 wrap.go:47] GET /api/v1/namespaces/ns/resourcequotas: (1.178665ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.348070  117445 wrap.go:47] POST /api/v1/namespaces/ns/secrets: (3.560158ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.350192  117445 wrap.go:47] POST /api/v1/namespaces/ns/secrets: (1.595698ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.352044  117445 wrap.go:47] POST /api/v1/namespaces/ns/configmaps: (1.339898ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.353843  117445 wrap.go:47] POST /api/v1/namespaces/ns/configmaps: (1.342271ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.355785  117445 wrap.go:47] POST /apis/storage.k8s.io/v1beta1/volumeattachments: (1.411537ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.357746  117445 wrap.go:47] GET /api/v1/namespaces/ns/limitranges: (981.571µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.358013  117445 admission.go:108] no storage class for claim mypvc (generate: )
I0111 22:09:13.359374  117445 wrap.go:47] POST /api/v1/namespaces/ns/persistentvolumeclaims: (3.184997ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.361589  117445 wrap.go:47] POST /api/v1/persistentvolumes: (1.625043ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.367888  117445 wrap.go:47] POST /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions: (2.445599ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.368598  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1?timeout=32s: (158.567µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.368652  117445 naming_controller.go:334] Adding csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.368675  117445 customresource_discovery_controller.go:249] Adding customresourcedefinition csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.370801  117445 wrap.go:47] POST /apis/apiregistration.k8s.io/v1/apiservices: (1.480885ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.371206  117445 apiservice_controller.go:141] Adding v1alpha1.csi.storage.k8s.io
I0111 22:09:13.371323  117445 available_controller.go:367] Adding v1alpha1.csi.storage.k8s.io
I0111 22:09:13.373002  117445 wrap.go:47] PUT /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/csinodeinfos.csi.storage.k8s.io/status: (3.485344ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.373139  117445 customresource_discovery_controller.go:256] Updating customresourcedefinition csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.373214  117445 naming_controller.go:340] Updating csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.376317  117445 wrap.go:47] PUT /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/csinodeinfos.csi.storage.k8s.io/status: (2.622241ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.376505  117445 naming_controller.go:340] Updating csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.376505  117445 customresource_discovery_controller.go:256] Updating customresourcedefinition csinodeinfos.csi.storage.k8s.io
I0111 22:09:13.870612  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1?timeout=32s: (388.37µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.874337  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mysecret", Reason: ""
I0111 22:09:13.874446  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (354.409µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.875212  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mypvsecret", Reason: ""
I0111 22:09:13.875303  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (246.855µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.875937  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmap", Reason: ""
I0111 22:09:13.876019  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (201.143µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.877452  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc", Reason: ""
I0111 22:09:13.877562  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (226.685µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.878075  117445 authorization.go:73] Forbidden: "/api/v1/persistentvolumes/mypv", Reason: ""
I0111 22:09:13.878132  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (163.149µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.878830  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.878911  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (196.385µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.879643  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.879702  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (176.683µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.880353  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2normalpod", Reason: ""
I0111 22:09:13.880439  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2normalpod: (188.362µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.885951  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2mirrorpod", Reason: ""
I0111 22:09:13.886033  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (238.89µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.886952  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2mirrorpod/eviction", Reason: ""
I0111 22:09:13.887026  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (211.524µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.888125  117445 authorization.go:73] Forbidden: "/api/v1/nodes", Reason: ""
I0111 22:09:13.888186  117445 wrap.go:47] POST /api/v1/nodes: (164.539µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.888693  117445 authorization.go:73] Forbidden: "/api/v1/nodes/node2/status", Reason: ""
I0111 22:09:13.888754  117445 wrap.go:47] PUT /api/v1/nodes/node2/status: (156.826µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.889236  117445 authorization.go:73] Forbidden: "/api/v1/nodes/node2", Reason: ""
I0111 22:09:13.889294  117445 wrap.go:47] DELETE /api/v1/nodes/node2: (157.014µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.889905  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get secret ns/mysecret
I0111 22:09:13.889965  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mysecret", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.890024  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (222.98µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.890421  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get secret ns/mypvsecret
I0111 22:09:13.890466  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mypvsecret", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.890555  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (192.436µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.891029  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get configmap ns/myconfigmap
I0111 22:09:13.891075  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmap", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.891123  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (157.528µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.898845  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get pvc ns/mypvc
I0111 22:09:13.898935  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.899006  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (293.273µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.899659  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get pv /mypv
I0111 22:09:13.899728  117445 authorization.go:73] Forbidden: "/api/v1/persistentvolumes/mypv", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.899784  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (210.02µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.900401  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.900491  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (214.109µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.901713  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (677.819µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.904064  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (585.285µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.905203  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (481.847µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.908466  117445 wrap.go:47] POST /api/v1/nodes: (479.567µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.910609  117445 wrap.go:47] PUT /api/v1/nodes/node2/status: (1.289861ms) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.911209  117445 authorization.go:73] Forbidden: "/api/v1/nodes/node2", Reason: ""
I0111 22:09:13.911289  117445 wrap.go:47] DELETE /api/v1/nodes/node2: (240.106µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.911918  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get unknown secret ns/mysecret
I0111 22:09:13.912016  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mysecret", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:09:13.912083  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (268.965µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.912619  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get secret ns/mypvsecret, no relationship to this object was found in the node authorizer graph
I0111 22:09:13.912689  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mypvsecret", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:09:13.912760  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (282.564µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.913357  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get unknown configmap ns/myconfigmap
I0111 22:09:13.913419  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmap", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:09:13.913518  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (241.673µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.914063  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get pvc ns/mypvc, no relationship to this object was found in the node authorizer graph
I0111 22:09:13.914144  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:09:13.914241  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (278.467µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.914801  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get pv /mypv, no relationship to this object was found in the node authorizer graph
I0111 22:09:13.914871  117445 authorization.go:73] Forbidden: "/api/v1/persistentvolumes/mypv", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:09:13.914944  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (225.073µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.915613  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.915688  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (222.073µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.918205  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.062138ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.918497  117445 graph_populator.go:150] updatePod ns/node2mirrorpod for node node2
I0111 22:09:13.922887  117445 graph_populator.go:167] deletePod ns/node2mirrorpod for node node2
I0111 22:09:13.923285  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (4.297444ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.926041  117445 graph_populator.go:150] updatePod ns/node2mirrorpod for node node2
I0111 22:09:13.926801  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.567042ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.930975  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (2.915425ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.934090  117445 wrap.go:47] POST /api/v1/nodes: (1.485639ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.936251  117445 wrap.go:47] PUT /api/v1/nodes/node2/status: (1.567282ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.936939  117445 authorization.go:73] Forbidden: "/api/v1/nodes/node2", Reason: ""
I0111 22:09:13.937012  117445 wrap.go:47] DELETE /api/v1/nodes/node2: (283.441µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.940130  117445 wrap.go:47] DELETE /api/v1/nodes/node2: (2.661706ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.942798  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.047582ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.942858  117445 graph_populator.go:150] updatePod ns/node2normalpod for node node2
I0111 22:09:13.943459  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mysecret", Reason: ""
I0111 22:09:13.943592  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (329.965µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.944146  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mypvsecret", Reason: ""
I0111 22:09:13.944225  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (203.074µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.944924  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmap", Reason: ""
I0111 22:09:13.945000  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (201.659µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.945623  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc", Reason: ""
I0111 22:09:13.945695  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (249.916µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.946199  117445 authorization.go:73] Forbidden: "/api/v1/persistentvolumes/mypv", Reason: ""
I0111 22:09:13.946267  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (168.404µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.946888  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.946946  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (165.253µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.947496  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2normalpod/status", Reason: ""
I0111 22:09:13.947581  117445 wrap.go:47] PUT /api/v1/namespaces/ns/pods/node2normalpod/status: (209.906µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.948071  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2normalpod", Reason: ""
I0111 22:09:13.948144  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2normalpod: (175.44µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.948687  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2normalpod/eviction", Reason: ""
I0111 22:09:13.948757  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2normalpod/eviction: (163.113µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.949359  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods", Reason: ""
I0111 22:09:13.949408  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (141.141µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.949865  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2mirrorpod", Reason: ""
I0111 22:09:13.949915  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (158.096µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.950435  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/pods/node2mirrorpod/eviction", Reason: ""
I0111 22:09:13.950549  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (223.232µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.951014  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get secret ns/mysecret
I0111 22:09:13.951067  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mysecret", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.951114  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (168.818µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.951441  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get secret ns/mypvsecret
I0111 22:09:13.951512  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/secrets/mypvsecret", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.951579  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (185.674µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.952054  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get configmap ns/myconfigmap
I0111 22:09:13.952107  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmap", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.952178  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (188.015µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.952743  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get pvc ns/mypvc
I0111 22:09:13.952820  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.952889  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (238.058µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.953295  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get pv /mypv
I0111 22:09:13.953351  117445 authorization.go:73] Forbidden: "/api/v1/persistentvolumes/mypv", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:09:13.953408  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (178.337µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.954384  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (562.145µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.955594  117445 wrap.go:47] PUT /api/v1/namespaces/ns/pods/node2normalpod/status: (696.525µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.956538  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2normalpod: (501.64µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.957344  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2normalpod/eviction: (352.453µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.958252  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (370.636µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:13.959132  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (335.777µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:14.961995  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (739.365µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:15.960693  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (682.356µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:16.960902  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (693.1µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:17.960775  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (743.27µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:18.960733  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (752.627µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:19.960674  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (601.186µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:20.122465  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.458749ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:20.124514  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.409809ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:20.128228  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.013441ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:20.129201  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:20.130180  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (627.604µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:20.130513  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:09:20.960619  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (622.508µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:21.960776  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (705.965µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:22.960790  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (725.721µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:23.960709  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (721.862µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:24.960659  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (702.229µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:25.960689  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (659.816µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:26.960727  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (723.181µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:27.960676  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (657.907µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:28.960662  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (665.555µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:29.960632  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (674.508µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:30.136152  117445 wrap.go:47] GET /api/v1/namespaces/default: (5.110795ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:30.142542  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (5.787666ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:30.155820  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.189819ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:30.157784  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:30.158812  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (716.922µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:30.159083  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:09:30.960886  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (745.699µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:31.960756  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (736.273µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:32.960653  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (728.466µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:33.960690  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (692.367µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:34.960780  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (695.541µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:35.960631  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (693.867µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:36.960702  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (714.998µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:37.960733  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (672.519µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:38.960631  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (647.966µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:39.960774  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (752.981µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:40.070245  117445 reflector.go:215] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: forcing resync
I0111 22:09:40.070517  117445 available_controller.go:373] Updating v1beta1.coordination.k8s.io
I0111 22:09:40.070572  117445 available_controller.go:373] Updating v1beta1.rbac.authorization.k8s.io
I0111 22:09:40.070593  117445 available_controller.go:373] Updating v1alpha1.storage.k8s.io
I0111 22:09:40.070632  117445 available_controller.go:373] Updating v1.
I0111 22:09:40.070662  117445 available_controller.go:373] Updating v1alpha1.auditregistration.k8s.io
I0111 22:09:40.070683  117445 available_controller.go:373] Updating v2alpha1.batch
I0111 22:09:40.070691  117445 available_controller.go:373] Updating v1.coordination.k8s.io
I0111 22:09:40.070713  117445 available_controller.go:373] Updating v1.networking.k8s.io
I0111 22:09:40.070734  117445 available_controller.go:373] Updating v1alpha1.scheduling.k8s.io
I0111 22:09:40.070763  117445 available_controller.go:373] Updating v1.apps
I0111 22:09:40.070888  117445 available_controller.go:373] Updating v1beta1.authentication.k8s.io
I0111 22:09:40.070910  117445 available_controller.go:373] Updating v2beta2.autoscaling
I0111 22:09:40.070960  117445 available_controller.go:373] Updating v1.batch
I0111 22:09:40.070973  117445 available_controller.go:373] Updating v1alpha1.admissionregistration.k8s.io
I0111 22:09:40.070997  117445 available_controller.go:373] Updating v1beta1.events.k8s.io
I0111 22:09:40.071007  117445 available_controller.go:373] Updating v1.storage.k8s.io
I0111 22:09:40.071037  117445 available_controller.go:373] Updating v1beta1.storage.k8s.io
I0111 22:09:40.071050  117445 available_controller.go:373] Updating v1.rbac.authorization.k8s.io
I0111 22:09:40.071080  117445 available_controller.go:373] Updating v1beta1.scheduling.k8s.io
I0111 22:09:40.071093  117445 available_controller.go:373] Updating v1beta1.apiextensions.k8s.io
I0111 22:09:40.071126  117445 available_controller.go:373] Updating v1beta1.apps
I0111 22:09:40.071135  117445 available_controller.go:373] Updating v1.authentication.k8s.io
I0111 22:09:40.071162  117445 available_controller.go:373] Updating v1beta1.certificates.k8s.io
I0111 22:09:40.071175  117445 available_controller.go:373] Updating v2beta1.autoscaling
I0111 22:09:40.071201  117445 available_controller.go:373] Updating v1beta2.apps
I0111 22:09:40.071222  117445 available_controller.go:373] Updating v1alpha1.rbac.authorization.k8s.io
I0111 22:09:40.071251  117445 available_controller.go:373] Updating v1alpha1.csi.storage.k8s.io
I0111 22:09:40.071263  117445 available_controller.go:373] Updating v1.authorization.k8s.io
I0111 22:09:40.071288  117445 available_controller.go:373] Updating v1.autoscaling
I0111 22:09:40.071300  117445 available_controller.go:373] Updating v1beta1.policy
I0111 22:09:40.071331  117445 available_controller.go:373] Updating v1alpha1.settings.k8s.io
I0111 22:09:40.071348  117445 available_controller.go:373] Updating v1beta1.admissionregistration.k8s.io
I0111 22:09:40.071375  117445 available_controller.go:373] Updating v1beta1.authorization.k8s.io
I0111 22:09:40.071381  117445 available_controller.go:373] Updating v1beta1.batch
I0111 22:09:40.071402  117445 available_controller.go:373] Updating v1beta1.extensions
I0111 22:09:40.161322  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.648935ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:40.163236  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.266682ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:40.166944  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.054077ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:40.167930  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:40.168904  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (707.559µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:40.169262  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:09:40.960632  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (631.023µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:41.960686  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (717.605µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:42.960657  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (686.612µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:43.960599  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (590.499µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:43.961621  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (480.119µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:43.962631  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (518.41µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:44.964243  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (605.35µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:45.964074  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (562.055µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:46.964347  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (608.866µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:47.964073  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (557.199µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:48.964237  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (601.241µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:49.964273  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (657.728µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:50.171188  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.370185ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:50.173146  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.293687ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:50.177278  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.082933ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:09:50.178428  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:09:50.179405  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (639.286µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:09:50.179711  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:09:50.964202  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (587.896µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:51.964289  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (663.807µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:52.964258  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (619.837µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:53.964432  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (766.184µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:54.964215  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (575.617µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:55.964329  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (710.44µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:56.964046  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (492.179µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:57.969389  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (644.154µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:58.964162  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (519.6µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:09:59.964255  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (643.995µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:00.181927  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.520314ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:00.183651  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.251298ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:00.187459  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.165993ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:00.188508  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:00.190126  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (710.488µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:00.190550  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:00.964080  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (554.141µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:01.964386  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (591.536µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:02.964262  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (633.574µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:03.964223  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (612.51µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:04.964240  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (578.879µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:05.964390  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (730.129µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:06.964463  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (600.077µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:07.964598  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (771.879µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:08.965238  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (719.857µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:09.964441  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (749.577µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.070590  117445 reflector.go:215] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: forcing resync
I0111 22:10:10.070757  117445 available_controller.go:373] Updating v2beta1.autoscaling
I0111 22:10:10.070795  117445 available_controller.go:373] Updating v1beta2.apps
I0111 22:10:10.070855  117445 available_controller.go:373] Updating v1alpha1.rbac.authorization.k8s.io
I0111 22:10:10.070893  117445 available_controller.go:373] Updating v1.authorization.k8s.io
I0111 22:10:10.070917  117445 available_controller.go:373] Updating v1.autoscaling
I0111 22:10:10.070960  117445 available_controller.go:373] Updating v1beta1.policy
I0111 22:10:10.071006  117445 available_controller.go:373] Updating v1alpha1.settings.k8s.io
I0111 22:10:10.071024  117445 available_controller.go:373] Updating v1alpha1.csi.storage.k8s.io
I0111 22:10:10.071073  117445 available_controller.go:373] Updating v1beta1.admissionregistration.k8s.io
I0111 22:10:10.071282  117445 available_controller.go:373] Updating v1beta1.authorization.k8s.io
I0111 22:10:10.071299  117445 available_controller.go:373] Updating v1beta1.batch
I0111 22:10:10.071341  117445 available_controller.go:373] Updating v1beta1.extensions
I0111 22:10:10.071354  117445 available_controller.go:373] Updating v1.
I0111 22:10:10.071384  117445 available_controller.go:373] Updating v1alpha1.auditregistration.k8s.io
I0111 22:10:10.071396  117445 available_controller.go:373] Updating v2alpha1.batch
I0111 22:10:10.071423  117445 available_controller.go:373] Updating v1.coordination.k8s.io
I0111 22:10:10.071435  117445 available_controller.go:373] Updating v1beta1.coordination.k8s.io
I0111 22:10:10.071463  117445 available_controller.go:373] Updating v1beta1.rbac.authorization.k8s.io
I0111 22:10:10.071509  117445 available_controller.go:373] Updating v1alpha1.storage.k8s.io
I0111 22:10:10.071589  117445 available_controller.go:373] Updating v1.apps
I0111 22:10:10.071606  117445 available_controller.go:373] Updating v1beta1.authentication.k8s.io
I0111 22:10:10.071626  117445 available_controller.go:373] Updating v2beta2.autoscaling
I0111 22:10:10.071639  117445 available_controller.go:373] Updating v1.batch
I0111 22:10:10.071660  117445 available_controller.go:373] Updating v1.networking.k8s.io
I0111 22:10:10.071673  117445 available_controller.go:373] Updating v1alpha1.scheduling.k8s.io
I0111 22:10:10.071702  117445 available_controller.go:373] Updating v1alpha1.admissionregistration.k8s.io
I0111 22:10:10.071715  117445 available_controller.go:373] Updating v1beta1.events.k8s.io
I0111 22:10:10.071765  117445 available_controller.go:373] Updating v1.storage.k8s.io
I0111 22:10:10.071778  117445 available_controller.go:373] Updating v1beta1.storage.k8s.io
I0111 22:10:10.071816  117445 available_controller.go:373] Updating v1beta1.apiextensions.k8s.io
I0111 22:10:10.071830  117445 available_controller.go:373] Updating v1beta1.apps
I0111 22:10:10.071861  117445 available_controller.go:373] Updating v1.authentication.k8s.io
I0111 22:10:10.071875  117445 available_controller.go:373] Updating v1beta1.certificates.k8s.io
I0111 22:10:10.071902  117445 available_controller.go:373] Updating v1.rbac.authorization.k8s.io
I0111 22:10:10.071915  117445 available_controller.go:373] Updating v1beta1.scheduling.k8s.io
I0111 22:10:10.124847  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.753178ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.126776  117445 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.353416ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.128823  117445 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.515779ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.193079  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.859242ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.195469  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.661716ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:10.200014  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.345288ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:10.201057  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:10.202350  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (930.971µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:10.202841  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:10.964401  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (623.841µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:11.964378  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (705.118µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:12.964342  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (607.641µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.964419  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (741.469µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.965760  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (608.966µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.968378  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mysecret: (1.66302ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.970763  117445 wrap.go:47] GET /api/v1/namespaces/ns/secrets/mypvsecret: (1.75725ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.972771  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmap: (1.431011ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.974725  117445 wrap.go:47] GET /api/v1/namespaces/ns/persistentvolumeclaims/mypvc: (1.437519ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.976676  117445 wrap.go:47] GET /api/v1/persistentvolumes/mypv: (1.443108ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.979308  117445 wrap.go:47] GET /api/v1/namespaces/ns/limitranges: (1.301416ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.979713  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.413407ms) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.982701  117445 wrap.go:47] PUT /api/v1/namespaces/ns/pods/node2normalpod/status: (2.339807ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.987713  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2normalpod: (4.29761ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.987777  117445 graph_populator.go:167] deletePod ns/node2normalpod for node node2
I0111 22:10:13.990945  117445 wrap.go:47] GET /api/v1/namespaces/ns/resourcequotas: (1.471694ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:13.993703  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (5.218541ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:14.998581  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.393277ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:15.998069  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.11476ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:16.998165  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.24502ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:18.001443  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.378816ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:18.998630  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.479845ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:19.997861  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.963121ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:20.204974  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.596682ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:20.207212  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.508889ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:20.211225  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.049665ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:20.212261  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:20.213220  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (648.613µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:20.213657  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:20.998038  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.118519ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:21.997932  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.067308ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:23.004710  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (9.77706ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:23.998164  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.208487ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:24.998574  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.488368ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:25.997829  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.87054ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:26.997365  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.535392ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:27.997967  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.074194ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:28.997864  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.834743ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:29.997733  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.789133ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:30.215559  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.344532ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:30.217504  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.280287ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:30.221029  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (991.841µs) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:30.221971  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:30.222905  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (600.781µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:30.223152  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:30.997567  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.604064ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:31.997935  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.977705ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:32.997376  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.513788ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:33.997888  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.952435ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:34.998004  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.076595ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:35.997835  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.836767ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:36.997451  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.561767ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:38.001447  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (6.557699ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:38.997236  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.334854ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:39.999338  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (4.372555ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:40.070940  117445 reflector.go:215] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: forcing resync
I0111 22:10:40.071114  117445 available_controller.go:373] Updating v1.apps
I0111 22:10:40.071140  117445 available_controller.go:373] Updating v1beta1.authentication.k8s.io
I0111 22:10:40.071216  117445 available_controller.go:373] Updating v2beta2.autoscaling
I0111 22:10:40.071231  117445 available_controller.go:373] Updating v1.batch
I0111 22:10:40.071333  117445 available_controller.go:373] Updating v1.networking.k8s.io
I0111 22:10:40.071388  117445 available_controller.go:373] Updating v1alpha1.scheduling.k8s.io
I0111 22:10:40.071454  117445 available_controller.go:373] Updating v1alpha1.admissionregistration.k8s.io
I0111 22:10:40.071609  117445 available_controller.go:373] Updating v1beta1.events.k8s.io
I0111 22:10:40.071644  117445 available_controller.go:373] Updating v1.storage.k8s.io
I0111 22:10:40.071693  117445 available_controller.go:373] Updating v1beta1.storage.k8s.io
I0111 22:10:40.071714  117445 available_controller.go:373] Updating v1beta1.apiextensions.k8s.io
I0111 22:10:40.071761  117445 available_controller.go:373] Updating v1beta1.apps
I0111 22:10:40.071781  117445 available_controller.go:373] Updating v1.authentication.k8s.io
I0111 22:10:40.071809  117445 available_controller.go:373] Updating v1beta1.certificates.k8s.io
I0111 22:10:40.071821  117445 available_controller.go:373] Updating v1.rbac.authorization.k8s.io
I0111 22:10:40.071855  117445 available_controller.go:373] Updating v1beta1.scheduling.k8s.io
I0111 22:10:40.071869  117445 available_controller.go:373] Updating v2beta1.autoscaling
I0111 22:10:40.071897  117445 available_controller.go:373] Updating v1beta2.apps
I0111 22:10:40.071908  117445 available_controller.go:373] Updating v1alpha1.rbac.authorization.k8s.io
I0111 22:10:40.071939  117445 available_controller.go:373] Updating v1.authorization.k8s.io
I0111 22:10:40.071951  117445 available_controller.go:373] Updating v1.autoscaling
I0111 22:10:40.071978  117445 available_controller.go:373] Updating v1beta1.policy
I0111 22:10:40.071989  117445 available_controller.go:373] Updating v1alpha1.settings.k8s.io
I0111 22:10:40.072017  117445 available_controller.go:373] Updating v1alpha1.csi.storage.k8s.io
I0111 22:10:40.072029  117445 available_controller.go:373] Updating v1beta1.admissionregistration.k8s.io
I0111 22:10:40.072062  117445 available_controller.go:373] Updating v1beta1.authorization.k8s.io
I0111 22:10:40.072073  117445 available_controller.go:373] Updating v1beta1.batch
I0111 22:10:40.072101  117445 available_controller.go:373] Updating v1beta1.extensions
I0111 22:10:40.072112  117445 available_controller.go:373] Updating v1alpha1.storage.k8s.io
I0111 22:10:40.072145  117445 available_controller.go:373] Updating v1.
I0111 22:10:40.072157  117445 available_controller.go:373] Updating v1alpha1.auditregistration.k8s.io
I0111 22:10:40.072190  117445 available_controller.go:373] Updating v2alpha1.batch
I0111 22:10:40.072201  117445 available_controller.go:373] Updating v1.coordination.k8s.io
I0111 22:10:40.072242  117445 available_controller.go:373] Updating v1beta1.coordination.k8s.io
I0111 22:10:40.072258  117445 available_controller.go:373] Updating v1beta1.rbac.authorization.k8s.io
I0111 22:10:40.225526  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.733188ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:40.227504  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.351081ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:40.230992  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.027332ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:40.232114  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:40.233168  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (720.632µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:40.233439  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:40.997905  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.040123ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:41.997230  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.352491ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:42.997881  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.013233ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:43.997005  117445 wrap.go:47] GET /api/v1/namespaces/ns/limitranges: (1.229682ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:43.999684  117445 wrap.go:47] GET /api/v1/namespaces/ns/resourcequotas: (1.155847ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.001417  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (6.478059ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.005225  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.942339ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.009870  117445 wrap.go:47] DELETE /api/v1/namespaces/ns/pods/node2mirrorpod: (3.945658ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.009978  117445 graph_populator.go:167] deletePod ns/node2mirrorpod for node node2
I0111 22:10:44.012579  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.097032ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.013064  117445 graph_populator.go:150] updatePod ns/node2normalpod for node node2
I0111 22:10:44.015005  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (1.805723ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.016065  117445 graph_populator.go:150] updatePod ns/node2mirrorpod for node node2
I0111 22:10:44.018605  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2normalpod/eviction: (2.920461ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.021900  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods/node2mirrorpod/eviction: (2.803085ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:44.024561  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.162305ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:45.028554  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.741086ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:46.028294  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.718165ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:47.029172  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.579085ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:48.028774  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.181979ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:49.029075  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.446218ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:50.028323  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.782494ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:50.235241  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.294965ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:50.237164  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.294697ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:50.240874  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.06328ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:10:50.241899  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:10:50.242866  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (707.352µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:10:50.243230  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:10:51.028337  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.753245ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:52.028340  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.766058ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:53.028253  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.741572ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:54.036982  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (11.403402ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:55.028205  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.758708ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:56.028176  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.566432ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:57.028462  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.870446ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:58.029704  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (4.167258ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:10:59.029496  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (4.026741ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:00.028797  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.174229ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:00.245335  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.613572ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:00.248654  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (2.496655ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:00.257831  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.019693ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:11:00.258785  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:11:00.259743  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (684.437µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:11:00.259993  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:11:01.028658  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.042149ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:02.028433  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.904564ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:03.029802  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (4.320402ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:04.028316  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.692383ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:05.028690  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.059843ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:06.027912  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.444361ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:07.028365  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.7945ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:08.030509  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.384505ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:09.028198  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.687433ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.028873  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.154481ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.071226  117445 reflector.go:215] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: forcing resync
I0111 22:11:10.071414  117445 available_controller.go:373] Updating v2beta1.autoscaling
I0111 22:11:10.071439  117445 available_controller.go:373] Updating v1beta2.apps
I0111 22:11:10.071578  117445 available_controller.go:373] Updating v1alpha1.rbac.authorization.k8s.io
I0111 22:11:10.071635  117445 available_controller.go:373] Updating v1.authorization.k8s.io
I0111 22:11:10.071720  117445 available_controller.go:373] Updating v1.autoscaling
I0111 22:11:10.071738  117445 available_controller.go:373] Updating v1beta1.policy
I0111 22:11:10.071752  117445 available_controller.go:373] Updating v1alpha1.settings.k8s.io
I0111 22:11:10.071788  117445 available_controller.go:373] Updating v1alpha1.csi.storage.k8s.io
I0111 22:11:10.071799  117445 available_controller.go:373] Updating v1beta1.admissionregistration.k8s.io
I0111 22:11:10.071808  117445 available_controller.go:373] Updating v1beta1.authorization.k8s.io
I0111 22:11:10.071833  117445 available_controller.go:373] Updating v1beta1.batch
I0111 22:11:10.071870  117445 available_controller.go:373] Updating v1beta1.extensions
I0111 22:11:10.071897  117445 available_controller.go:373] Updating v1alpha1.storage.k8s.io
I0111 22:11:10.071935  117445 available_controller.go:373] Updating v1.
I0111 22:11:10.071950  117445 available_controller.go:373] Updating v1alpha1.auditregistration.k8s.io
I0111 22:11:10.071974  117445 available_controller.go:373] Updating v2alpha1.batch
I0111 22:11:10.071992  117445 available_controller.go:373] Updating v1.coordination.k8s.io
I0111 22:11:10.072028  117445 available_controller.go:373] Updating v1beta1.coordination.k8s.io
I0111 22:11:10.072045  117445 available_controller.go:373] Updating v1beta1.rbac.authorization.k8s.io
I0111 22:11:10.072069  117445 available_controller.go:373] Updating v1.apps
I0111 22:11:10.072075  117445 available_controller.go:373] Updating v1beta1.authentication.k8s.io
I0111 22:11:10.072097  117445 available_controller.go:373] Updating v2beta2.autoscaling
I0111 22:11:10.072108  117445 available_controller.go:373] Updating v1.batch
I0111 22:11:10.072128  117445 available_controller.go:373] Updating v1.networking.k8s.io
I0111 22:11:10.072142  117445 available_controller.go:373] Updating v1alpha1.scheduling.k8s.io
I0111 22:11:10.072173  117445 available_controller.go:373] Updating v1alpha1.admissionregistration.k8s.io
I0111 22:11:10.072188  117445 available_controller.go:373] Updating v1beta1.events.k8s.io
I0111 22:11:10.072222  117445 available_controller.go:373] Updating v1.storage.k8s.io
I0111 22:11:10.072246  117445 available_controller.go:373] Updating v1beta1.storage.k8s.io
I0111 22:11:10.072266  117445 available_controller.go:373] Updating v1beta1.apiextensions.k8s.io
I0111 22:11:10.072290  117445 available_controller.go:373] Updating v1beta1.apps
I0111 22:11:10.072324  117445 available_controller.go:373] Updating v1.authentication.k8s.io
I0111 22:11:10.072348  117445 available_controller.go:373] Updating v1beta1.certificates.k8s.io
I0111 22:11:10.072380  117445 available_controller.go:373] Updating v1.rbac.authorization.k8s.io
I0111 22:11:10.072401  117445 available_controller.go:373] Updating v1beta1.scheduling.k8s.io
I0111 22:11:10.131498  117445 wrap.go:47] GET /api/v1/namespaces/kube-system: (1.742837ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.133943  117445 wrap.go:47] GET /api/v1/namespaces/kube-public: (1.810665ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.135636  117445 wrap.go:47] GET /api/v1/namespaces/kube-node-lease: (1.200802ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.262104  117445 wrap.go:47] GET /api/v1/namespaces/default: (1.510211ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.264050  117445 wrap.go:47] GET /api/v1/namespaces/default/services/kubernetes: (1.280058ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:10.267817  117445 wrap.go:47] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.071485ms) 404 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:11:10.268950  117445 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
I0111 22:11:10.269854  117445 wrap.go:47] POST /api/v1/namespaces/default/endpoints: (677.974µs) 422 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
E0111 22:11:10.270201  117445 controller.go:204] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
I0111 22:11:11.028578  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (3.001944ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:12.028579  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.938658ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:13.028660  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (2.704397ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.027651  117445 wrap.go:47] GET /api/v1/namespaces/ns/limitranges: (1.26775ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.029608  117445 wrap.go:47] GET /api/v1/namespaces/ns/resourcequotas: (1.138748ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.031769  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (6.230788ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.037743  117445 wrap.go:47] POST /api/v1/namespaces/ns/pods: (5.332833ms) 409 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.038241  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:false]}
I0111 22:11:14.038731  117445 node_authorizer.go:162] NODE DENY: node1 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc0349949c0), Verb:"patch", Namespace:"ns", APIGroup:"", APIVersion:"v1", Resource:"persistentvolumeclaims", Subresource:"status", Name:"mypvc", ResourceRequest:true, Path:"/api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status"}
I0111 22:11:14.038891  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status", Reason: "can only get individual resources of this type"
I0111 22:11:14.038985  117445 wrap.go:47] PATCH /api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status: (358.787µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.039586  117445 node_authorizer.go:162] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"patch", Namespace:"ns", APIGroup:"", APIVersion:"v1", Resource:"persistentvolumeclaims", Subresource:"status", Name:"mypvc", ResourceRequest:true, Path:"/api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status"}
I0111 22:11:14.039684  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status", Reason: "can only get individual resources of this type"
I0111 22:11:14.039754  117445 wrap.go:47] PATCH /api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status: (283.5µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.040077  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:true]}
I0111 22:11:14.040390  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get pvc ns/mypvc
I0111 22:11:14.040433  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:11:14.040519  117445 wrap.go:47] PATCH /api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status: (200.034µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.043704  117445 wrap.go:47] PATCH /api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status: (2.726494ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.045349  117445 wrap.go:47] PATCH /api/v1/namespaces/ns/persistentvolumeclaims/mypvc/status: (1.15638ms) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:11:14.045708  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=false. It will be removed in a future release.
I0111 22:11:14.045731  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:false DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:true]}
I0111 22:11:14.046124  117445 authorization.go:73] Forbidden: "/apis/storage.k8s.io/v1beta1/volumeattachments/myattachment", Reason: "disabled by feature gate CSIPersistentVolume"
I0111 22:11:14.046231  117445 wrap.go:47] GET /apis/storage.k8s.io/v1beta1/volumeattachments/myattachment: (226.755µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.046955  117445 authorization.go:73] Forbidden: "/apis/storage.k8s.io/v1beta1/volumeattachments/myattachment", Reason: "disabled by feature gate CSIPersistentVolume"
I0111 22:11:14.047033  117445 wrap.go:47] GET /apis/storage.k8s.io/v1beta1/volumeattachments/myattachment: (249.057µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:11:14.047308  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
I0111 22:11:14.047328  117445 feature_gate.go:226] feature gates: &{map[DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:true CSIPersistentVolume:true]}
I0111 22:11:14.047693  117445 node_authorizer.go:198] NODE DENY: unknown node "node1" cannot get volumeattachment /myattachment
I0111 22:11:14.047752  117445 authorization.go:73] Forbidden: "/apis/storage.k8s.io/v1beta1/volumeattachments/myattachment", Reason: "no relationship found between node \"node1\" and this object"
I0111 22:11:14.047817  117445 wrap.go:47] GET /apis/storage.k8s.io/v1beta1/volumeattachments/myattachment: (227.2µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.049306  117445 wrap.go:47] GET /apis/storage.k8s.io/v1beta1/volumeattachments/myattachment: (1.08787ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.051294  117445 wrap.go:47] POST /api/v1/nodes: (1.475989ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.052845  117445 wrap.go:47] GET /api/v1/nodes/node2: (1.080543ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.053767  117445 wrap.go:47] PUT /api/v1/nodes/node2: (511.353µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.054303  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get unknown configmap ns/myconfigmapconfigsource
I0111 22:11:14.054374  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmapconfigsource", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:11:14.054433  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmapconfigsource: (206.766µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.055923  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmapconfigsource: (1.058037ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.057205  117445 wrap.go:47] GET /api/v1/nodes/node2: (901.682µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.059095  117445 wrap.go:47] PUT /api/v1/nodes/node2: (1.430768ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.059453  117445 graph_populator.go:112] updateNode configSource reference to ns/myconfigmapconfigsource for node node2
I0111 22:11:14.060464  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmapconfigsource: (867.208µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.061915  117445 wrap.go:47] GET /api/v1/nodes/node2: (978.332µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.064335  117445 graph_populator.go:112] updateNode configSource reference to nil for node node2
I0111 22:11:14.064526  117445 wrap.go:47] PUT /api/v1/nodes/node2: (2.183349ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.065010  117445 node_authorizer.go:198] NODE DENY: node "node2" cannot get unknown configmap ns/myconfigmapconfigsource
I0111 22:11:14.065103  117445 authorization.go:73] Forbidden: "/api/v1/namespaces/ns/configmaps/myconfigmapconfigsource", Reason: "no relationship found between node \"node2\" and this object"
I0111 22:11:14.065183  117445 wrap.go:47] GET /api/v1/namespaces/ns/configmaps/myconfigmapconfigsource: (280.617µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.068221  117445 wrap.go:47] DELETE /api/v1/nodes/node2: (2.496898ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.069564  117445 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0111 22:11:14.070853  117445 wrap.go:47] GET /api/v1/namespaces/kube-node-lease/resourcequotas: (966.696µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.072150  117445 wrap.go:47] POST /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases: (3.302199ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.073633  117445 wrap.go:47] GET /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (1.021724ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.081214  117445 wrap.go:47] GET /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (1.256307ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.083640  117445 wrap.go:47] PUT /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (1.853445ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.086264  117445 wrap.go:47] PATCH /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (2.051889ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.089410  117445 wrap.go:47] DELETE /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (2.620175ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.090321  117445 wrap.go:47] POST /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases: (301.427µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.090849  117445 node_authorizer.go:256] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"get", Namespace:"kube-node-lease", APIGroup:"coordination.k8s.io", APIVersion:"v1beta1", Resource:"leases", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1"}
I0111 22:11:14.090955  117445 authorization.go:73] Forbidden: "/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1", Reason: "can only access node lease with the same name as the requesting node"
I0111 22:11:14.091017  117445 wrap.go:47] GET /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (243.834µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.091558  117445 node_authorizer.go:256] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"get", Namespace:"kube-node-lease", APIGroup:"coordination.k8s.io", APIVersion:"v1beta1", Resource:"leases", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1"}
I0111 22:11:14.091651  117445 authorization.go:73] Forbidden: "/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1", Reason: "can only access node lease with the same name as the requesting node"
I0111 22:11:14.091733  117445 wrap.go:47] GET /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (258.266µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.092216  117445 node_authorizer.go:256] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"patch", Namespace:"kube-node-lease", APIGroup:"coordination.k8s.io", APIVersion:"v1beta1", Resource:"leases", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1"}
I0111 22:11:14.092303  117445 authorization.go:73] Forbidden: "/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1", Reason: "can only access node lease with the same name as the requesting node"
I0111 22:11:14.092386  117445 wrap.go:47] PATCH /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (244.7µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.092977  117445 node_authorizer.go:256] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"delete", Namespace:"kube-node-lease", APIGroup:"coordination.k8s.io", APIVersion:"v1beta1", Resource:"leases", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1"}
I0111 22:11:14.093055  117445 authorization.go:73] Forbidden: "/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1", Reason: "can only access node lease with the same name as the requesting node"
I0111 22:11:14.093132  117445 wrap.go:47] DELETE /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/node1: (228.745µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
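
The NODE DENY / Forbidden lines above (and the CSINodeInfo denials a little further down) come from the node authorizer's name-matching rule: a client authenticated as node2 may only touch the kube-node-lease Lease (or CSINodeInfo object) whose name matches its own node name, so every request it makes against node1's objects is answered 403. A minimal sketch of that rule, written against the authorizer.Attributes interface that the AttributesRecord dumps above implement (simplified illustration only, not the actual plugin code):

    package sketch

    import "k8s.io/apiserver/pkg/authorization/authorizer"

    // nodeCanAccessLease is a simplified illustration of the check behind
    // "can only access node lease with the same name as the requesting node".
    // The real node authorizer also consults a graph of related objects for
    // other resources; this sketch only reproduces the name/namespace/group
    // comparison used for leases.
    func nodeCanAccessLease(nodeName string, attrs authorizer.Attributes) bool {
        return attrs.IsResourceRequest() &&
            attrs.GetAPIGroup() == "coordination.k8s.io" &&
            attrs.GetResource() == "leases" &&
            attrs.GetNamespace() == "kube-node-lease" &&
            attrs.GetName() == nodeName
    }

For the denied requests logged above, nodeName is "node2" while the requested lease name is "node1", so the check fails and the apiserver responds 403.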
I0111 22:11:14.094309  117445 clientconn.go:551] parsed scheme: ""
I0111 22:11:14.094392  117445 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0111 22:11:14.094419  117445 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0111 22:11:14.094454  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:14.094775  117445 balancer_v1_wrapper.go:245] clientv3/balancer: pin "127.0.0.1:2379"
I0111 22:11:14.094835  117445 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0111 22:11:14.095118  117445 reflector.go:169] Listing and watching *unstructured.Unstructured from storage/cacher.go:/csi.storage.k8s.io/csinodeinfos
I0111 22:11:14.097828  117445 wrap.go:47] POST /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos: (3.866306ms) 201 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.100351  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (1.013302ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.101782  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (960.947µs) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.104604  117445 wrap.go:47] PUT /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (2.316187ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.107263  117445 wrap.go:47] PATCH /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (2.220881ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.111771  117445 wrap.go:47] DELETE /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (3.792146ms) 200 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.112789  117445 wrap.go:47] POST /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos: (451.993µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.113267  117445 node_authorizer.go:285] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"get", Namespace:"", APIGroup:"csi.storage.k8s.io", APIVersion:"v1alpha1", Resource:"csinodeinfos", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1"}
I0111 22:11:14.113437  117445 authorization.go:73] Forbidden: "/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1", Reason: "can only access CSINodeInfo with the same name as the requesting node"
I0111 22:11:14.113564  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (366.272µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.114057  117445 node_authorizer.go:285] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"get", Namespace:"", APIGroup:"csi.storage.k8s.io", APIVersion:"v1alpha1", Resource:"csinodeinfos", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1"}
I0111 22:11:14.114120  117445 authorization.go:73] Forbidden: "/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1", Reason: "can only access CSINodeInfo with the same name as the requesting node"
I0111 22:11:14.114189  117445 wrap.go:47] GET /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (231.166µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.114677  117445 node_authorizer.go:285] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"patch", Namespace:"", APIGroup:"csi.storage.k8s.io", APIVersion:"v1alpha1", Resource:"csinodeinfos", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1"}
I0111 22:11:14.114748  117445 authorization.go:73] Forbidden: "/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1", Reason: "can only access CSINodeInfo with the same name as the requesting node"
I0111 22:11:14.114824  117445 wrap.go:47] PATCH /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (215.707µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
I0111 22:11:14.115345  117445 node_authorizer.go:285] NODE DENY: node2 &authorizer.AttributesRecord{User:(*user.DefaultInfo)(0xc034eba940), Verb:"delete", Namespace:"", APIGroup:"csi.storage.k8s.io", APIVersion:"v1alpha1", Resource:"csinodeinfos", Subresource:"", Name:"node1", ResourceRequest:true, Path:"/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1"}
I0111 22:11:14.115412  117445 authorization.go:73] Forbidden: "/apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1", Reason: "can only access CSINodeInfo with the same name as the requesting node"
I0111 22:11:14.115515  117445 wrap.go:47] DELETE /apis/csi.storage.k8s.io/v1alpha1/csinodeinfos/node1: (243.357µs) 403 [auth.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33280]
W0111 22:11:14.115838  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=false. It will be removed in a future release.
I0111 22:11:14.115858  117445 feature_gate.go:226] feature gates: &{map[NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:true CSIPersistentVolume:false DynamicKubeletConfig:true]}
W0111 22:11:14.115903  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
I0111 22:11:14.115910  117445 feature_gate.go:226] feature gates: &{map[ExpandPersistentVolumes:true CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true]}
I0111 22:11:14.115944  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:false]}
I0111 22:11:14.115983  117445 feature_gate.go:226] feature gates: &{map[DynamicKubeletConfig:true NodeLease:true CSINodeInfo:true ExpandPersistentVolumes:true CSIPersistentVolume:true]}
I0111 22:11:14.116191  117445 log.go:30] transport: loopyWriter.run returning. connection error: desc = "transport is closing"
... skipping 75 repeated "transport is closing" lines ...
I0111 22:11:14.126136  117445 establishing_controller.go:84] Shutting down EstablishingController
I0111 22:11:14.126161  117445 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0111 22:11:14.126178  117445 naming_controller.go:295] Shutting down NamingConditionController
I0111 22:11:14.126179  117445 feature_gate.go:226] feature gates: &{map[ExpandPersistentVolumes:true CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:false]}
I0111 22:11:14.126195  117445 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0111 22:11:14.126219  117445 autoregister_controller.go:160] Shutting down autoregister controller
I0111 22:11:14.126226  117445 feature_gate.go:226] feature gates: &{map[ExpandPersistentVolumes:true CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:false]}
I0111 22:11:14.126239  117445 crd_finalizer.go:254] Shutting down CRDFinalizer
I0111 22:11:14.126256  117445 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
I0111 22:11:14.126261  117445 feature_gate.go:226] feature gates: &{map[CSIPersistentVolume:true DynamicKubeletConfig:true NodeLease:true CSINodeInfo:false ExpandPersistentVolumes:true]}
I0111 22:11:14.126272  117445 available_controller.go:328] Shutting down AvailableConditionController
W0111 22:11:14.126294  117445 feature_gate.go:218] Setting GA feature gate CSIPersistentVolume=true. It will be removed in a future release.
I0111 22:11:14.126302  117445 feature_gate.go:226] feature gates: &{map[DynamicKubeletConfig:true NodeLease:true CSINodeInfo:false ExpandPersistentVolumes:true CSIPersistentVolume:true]}
testserver.go:142: runtime-config=map[api/all:true]
testserver.go:143: Starting kube-apiserver on port 42917...
testserver.go:155: Waiting for /healthz to be ok...
wait.go:279: unexpected response, will retry: pods "node2mirrorpod" is forbidden: node "node1" can only delete pods with spec.nodeName set to itself
wait.go:373: unexpected response, will retry: pods "node2mirrorpod" is forbidden: node "node1" can only delete pods with spec.nodeName set to itself
... skipping 30 identical retry lines ...
node_test.go:565: Expected notfound error, got pods "node2mirrorpod" is forbidden: node "node1" can only delete pods with spec.nodeName set to itself
wait.go:279: unexpected response, will retry: pods "node2mirrorpod" is forbidden: node node1 can only evict pods with spec.nodeName set to itself
wait.go:373: unexpected response, will retry: pods "node2mirrorpod" is forbidden: node node1 can only evict pods with spec.nodeName set to itself
... skipping 30 identical retry lines ...
node_test.go:566: Expected notfound error, got pods "node2mirrorpod" is forbidden: node node1 can only evict pods with spec.nodeName set to itself
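
The block above is the eviction variant of the same check: the test repeatedly posts an Eviction for node2mirrorpod using node1's credentials and waits for the request to come back NotFound, but every attempt is answered Forbidden ("can only evict pods with spec.nodeName set to itself"), so node_test.go:566 fails. The request pattern is roughly the sketch below (a hypothetical helper, not the test's own code; the namespace and pod name are placeholders):

    package sketch

    import (
        policyv1beta1 "k8s.io/api/policy/v1beta1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // evictAsNode posts a policy/v1beta1 Eviction for the named pod using the
    // given client. In the failing check the client is authenticated as node1
    // while the pod is bound to node2, so the node authorizer keeps answering 403.
    func evictAsNode(nodeClient kubernetes.Interface, namespace, podName string) error {
        eviction := &policyv1beta1.Eviction{
            ObjectMeta: metav1.ObjectMeta{Name: podName, Namespace: namespace},
        }
        err := nodeClient.CoreV1().Pods(namespace).Evict(eviction)
        if apierrors.IsForbidden(err) {
            // What the log shows on every retry; the test instead waits for
            // apierrors.IsNotFound(err), which never happens here.
            return err
        }
        return err
    }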
wait.go:279: unexpected response, will retry: object is being deleted: pods "node2mirrorpod" already exists
wait.go:373: unexpected response, will retry: object is being deleted: pods "node2mirrorpod" already exists
... skipping 30 identical retry lines ...
node_test.go:578: Expected no error, got object is being deleted: pods "node2mirrorpod" already exists
wait.go:279: unexpected response, will retry: object is being deleted: pods "node2normalpod" already exists
wait.go:373: unexpected response, will retry: object is being deleted: pods "node2normalpod" already exists
... skipping 30 identical retry lines ...
node_test.go:588: Expected no error, got object is being deleted: pods "node2normalpod" already exists
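
The last two failures (node_test.go:578 and :588) line up with this PR's behavior change: if eviction now triggers graceful deletion, the previously evicted pod can still exist with a deletionTimestamp when the test immediately recreates a pod of the same name, so the registry keeps answering "object is being deleted: ... already exists" until the poll times out. The repeated wait.go lines come from a polling loop roughly like the sketch below (the interval, timeout, and helper name are assumptions, not taken from the test):

    package sketch

    import (
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // recreateWhenGone keeps trying to create the pod until the apiserver stops
    // answering AlreadyExists ("object is being deleted: ... already exists"
    // maps to an AlreadyExists status while the old pod still has a
    // deletionTimestamp). The 1s/30s values are illustrative only.
    func recreateWhenGone(client kubernetes.Interface, pod *corev1.Pod) error {
        return wait.PollImmediate(time.Second, 30*time.Second, func() (bool, error) {
            _, err := client.CoreV1().Pods(pod.Namespace).Create(pod)
            if err == nil {
                return true, nil
            }
            if apierrors.IsAlreadyExists(err) {
                return false, nil // old pod is still terminating; retry
            }
            return false, err
        })
    }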
				from junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-220628.xml

Error lines from build-log.txt

... skipping 10 lines ...
I0111 21:52:35.973] process 229 exited with code 0 after 0.0m
I0111 21:52:35.974] Call:  gcloud config get-value account
I0111 21:52:36.399] process 241 exited with code 0 after 0.0m
I0111 21:52:36.400] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 21:52:36.400] Call:  kubectl get -oyaml pods/00dbe503-15eb-11e9-829f-0a580a6c0288
W0111 21:52:38.202] The connection to the server localhost:8080 was refused - did you specify the right host or port?
E0111 21:52:38.207] Command failed
I0111 21:52:38.207] process 253 exited with code 1 after 0.0m
E0111 21:52:38.207] unable to upload podspecs: Command '['kubectl', 'get', '-oyaml', 'pods/00dbe503-15eb-11e9-829f-0a580a6c0288']' returned non-zero exit status 1
I0111 21:52:38.208] Root: /workspace
I0111 21:52:38.208] cd to /workspace
I0111 21:52:38.208] Checkout: /workspace/k8s.io/kubernetes master:08bee2cc8453c50c6d632634e9ceffe05bf8d4ba,72730:7dfa408301791b8aadd6f529408c8a3f9e9b45c7 to /workspace/k8s.io/kubernetes
I0111 21:52:38.208] Call:  git init k8s.io/kubernetes
... skipping 801 lines ...
W0111 22:01:27.939] I0111 22:01:27.939071   56042 leaderelection.go:220] successfully acquired lease kube-system/kube-controller-manager
W0111 22:01:27.940] I0111 22:01:27.939347   56042 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"71ed0886-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"148", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 06e8befc4567_71ebe107-15ec-11e9-8492-0242ac110002 became leader
W0111 22:01:27.995] I0111 22:01:27.994722   56042 plugins.go:103] No cloud provider specified.
W0111 22:01:27.995] W0111 22:01:27.994790   56042 controllermanager.go:536] "serviceaccount-token" is disabled because there is no private key
W0111 22:01:27.995] W0111 22:01:27.994801   56042 controllermanager.go:495] "tokencleaner" is disabled
W0111 22:01:27.996] I0111 22:01:27.995154   56042 node_lifecycle_controller.go:77] Sending events to api server
W0111 22:01:27.996] E0111 22:01:27.995204   56042 core.go:159] failed to start cloud node lifecycle controller: no cloud provider provided
W0111 22:01:27.996] W0111 22:01:27.995230   56042 controllermanager.go:508] Skipping "cloudnodelifecycle"
W0111 22:01:27.996] W0111 22:01:27.995240   56042 controllermanager.go:508] Skipping "root-ca-cert-publisher"
W0111 22:01:27.996] I0111 22:01:27.995455   56042 controllermanager.go:516] Started "podgc"
W0111 22:01:27.996] I0111 22:01:27.995585   56042 gc_controller.go:76] Starting GC controller
W0111 22:01:27.996] I0111 22:01:27.995601   56042 controller_utils.go:1021] Waiting for caches to sync for GC controller
I0111 22:01:28.097] +++ [0111 22:01:27] On try 3, controller-manager: ok
... skipping 21 lines ...
W0111 22:01:28.376] I0111 22:01:28.107376   56042 controller_utils.go:1021] Waiting for caches to sync for ReplicaSet controller
W0111 22:01:28.376] I0111 22:01:28.107927   56042 controllermanager.go:516] Started "statefulset"
W0111 22:01:28.376] I0111 22:01:28.108137   56042 controllermanager.go:516] Started "csrcleaner"
W0111 22:01:28.376] I0111 22:01:28.108153   56042 stateful_set.go:151] Starting stateful set controller
W0111 22:01:28.376] I0111 22:01:28.108233   56042 controller_utils.go:1021] Waiting for caches to sync for stateful set controller
W0111 22:01:28.376] I0111 22:01:28.108265   56042 cleaner.go:81] Starting CSR cleaner controller
W0111 22:01:28.377] E0111 22:01:28.108767   56042 core.go:77] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0111 22:01:28.377] W0111 22:01:28.108788   56042 controllermanager.go:508] Skipping "service"
W0111 22:01:28.377] I0111 22:01:28.109290   56042 controllermanager.go:516] Started "serviceaccount"
W0111 22:01:28.377] I0111 22:01:28.109329   56042 serviceaccounts_controller.go:115] Starting service account controller
W0111 22:01:28.377] I0111 22:01:28.109343   56042 controller_utils.go:1021] Waiting for caches to sync for service account controller
W0111 22:01:28.377] I0111 22:01:28.110000   56042 controllermanager.go:516] Started "deployment"
W0111 22:01:28.377] I0111 22:01:28.110069   56042 deployment_controller.go:152] Starting deployment controller
... skipping 20 lines ...
W0111 22:01:28.380] I0111 22:01:28.164770   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0111 22:01:28.380] I0111 22:01:28.164814   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0111 22:01:28.380] I0111 22:01:28.164916   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W0111 22:01:28.380] I0111 22:01:28.164963   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W0111 22:01:28.380] I0111 22:01:28.164989   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0111 22:01:28.381] I0111 22:01:28.165016   56042 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0111 22:01:28.381] E0111 22:01:28.165059   56042 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 22:01:28.381] I0111 22:01:28.165084   56042 controllermanager.go:516] Started "resourcequota"
W0111 22:01:28.381] I0111 22:01:28.165912   56042 controllermanager.go:516] Started "csrapproving"
W0111 22:01:28.381] I0111 22:01:28.166214   56042 node_lifecycle_controller.go:261] Sending events to api server.
W0111 22:01:28.381] I0111 22:01:28.166457   56042 node_lifecycle_controller.go:294] Controller is using taint based evictions.
W0111 22:01:28.381] I0111 22:01:28.166566   56042 taint_manager.go:175] Sending events to api server.
W0111 22:01:28.381] I0111 22:01:28.167148   56042 node_lifecycle_controller.go:360] Controller will taint node by condition.
... skipping 39 lines ...
W0111 22:01:28.386] I0111 22:01:28.186099   56042 ttl_controller.go:116] Starting TTL controller
W0111 22:01:28.386] I0111 22:01:28.186118   56042 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0111 22:01:28.386] I0111 22:01:28.186165   56042 daemon_controller.go:267] Starting daemon sets controller
W0111 22:01:28.386] I0111 22:01:28.186174   56042 controller_utils.go:1021] Waiting for caches to sync for daemon sets controller
W0111 22:01:28.386] I0111 22:01:28.186580   56042 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0111 22:01:28.386] I0111 22:01:28.186680   56042 controller_utils.go:1021] Waiting for caches to sync for ClusterRoleAggregator controller
W0111 22:01:28.386] W0111 22:01:28.214436   56042 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0111 22:01:28.387] I0111 22:01:28.273439   56042 controller_utils.go:1028] Caches are synced for certificate controller
W0111 22:01:28.387] I0111 22:01:28.280700   56042 controller_utils.go:1028] Caches are synced for namespace controller
W0111 22:01:28.387] I0111 22:01:28.286373   56042 controller_utils.go:1028] Caches are synced for TTL controller
W0111 22:01:28.387] I0111 22:01:28.301847   56042 controller_utils.go:1028] Caches are synced for GC controller
W0111 22:01:28.387] I0111 22:01:28.307363   56042 controller_utils.go:1028] Caches are synced for endpoint controller
W0111 22:01:28.387] I0111 22:01:28.307768   56042 controller_utils.go:1028] Caches are synced for ReplicaSet controller
... skipping 37 lines ...
I0111 22:01:28.873]   "buildDate": "2019-01-11T21:59:48Z",
I0111 22:01:28.873]   "goVersion": "go1.11.4",
I0111 22:01:28.873]   "compiler": "gc",
I0111 22:01:28.873]   "platform": "linux/amd64"
I0111 22:01:28.982] }+++ [0111 22:01:28] Testing kubectl version: check client only output matches expected output
W0111 22:01:29.083] I0111 22:01:28.887017   56042 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0111 22:01:29.083] E0111 22:01:28.896987   56042 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0111 22:01:29.084] E0111 22:01:28.897299   56042 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0111 22:01:29.084] I0111 22:01:28.904839   56042 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 22:01:29.084] I0111 22:01:28.904875   56042 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0111 22:01:29.084] E0111 22:01:28.904953   56042 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0111 22:01:29.185] Successful: the flag '--client' shows correct client info
I0111 22:01:29.185] (BSuccessful: the flag '--client' correctly has no server version info
I0111 22:01:29.185] (B+++ [0111 22:01:29] Testing kubectl version: verify json output
I0111 22:01:29.298] Successful: --output json has correct client info
I0111 22:01:29.305] (BSuccessful: --output json has correct server info
I0111 22:01:29.309] (B+++ [0111 22:01:29] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0111 22:01:29.453] Successful: --client --output json has correct client info
I0111 22:01:29.460] (BSuccessful: --client --output json has no server info
I0111 22:01:29.463] (B+++ [0111 22:01:29] Testing kubectl version: compare json output using additional --short flag
W0111 22:01:29.598] I0111 22:01:29.597714   56042 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:01:29.698] I0111 22:01:29.698110   56042 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 22:01:29.715] E0111 22:01:29.714291   56042 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0111 22:01:29.815] Successful: --short --output client json info is equal to non short result
I0111 22:01:29.816] (BSuccessful: --short --output server json info is equal to non short result
I0111 22:01:29.816] (B+++ [0111 22:01:29] Testing kubectl version: compare json output with yaml output
I0111 22:01:29.816] Successful: --output json/yaml has identical information
I0111 22:01:29.816] (B+++ exit code: 0
I0111 22:01:29.816] Recording: run_kubectl_config_set_tests
... skipping 42 lines ...
I0111 22:01:32.334] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:01:32.336] +++ command: run_RESTMapper_evaluation_tests
I0111 22:01:32.347] +++ [0111 22:01:32] Creating namespace namespace-1547244092-21060
I0111 22:01:32.415] namespace/namespace-1547244092-21060 created
I0111 22:01:32.481] Context "test" modified.
I0111 22:01:32.487] +++ [0111 22:01:32] Testing RESTMapper
I0111 22:01:32.599] +++ [0111 22:01:32] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0111 22:01:32.614] +++ exit code: 0
I0111 22:01:32.746] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0111 22:01:32.746] bindings                                                                      true         Binding
I0111 22:01:32.746] componentstatuses                 cs                                          false        ComponentStatus
I0111 22:01:32.747] configmaps                        cm                                          true         ConfigMap
I0111 22:01:32.747] endpoints                         ep                                          true         Endpoints
... skipping 606 lines ...
I0111 22:01:52.457] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0111 22:01:52.547] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0111 22:01:52.618] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0111 22:01:52.711] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0111 22:01:52.859] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:01:53.033] (Bpod/env-test-pod created
W0111 22:01:53.133] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0111 22:01:53.134] error: setting 'all' parameter but found a non empty selector. 
W0111 22:01:53.134] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:01:53.134] I0111 22:01:52.131126   52691 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0111 22:01:53.134] error: min-available and max-unavailable cannot be both specified
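
The "min-available and max-unavailable cannot be both specified" error above is the flag validation around PodDisruptionBudget creation: the spec allows exactly one of the two fields. Expressed against the policy/v1beta1 Go types, a budget like the test-pdb-3 object created above looks roughly like this (a sketch; the selector is omitted for brevity):

    package sketch

    import (
        policyv1beta1 "k8s.io/api/policy/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // newMaxUnavailablePDB builds a budget like test-pdb-3. Exactly one of
    // Spec.MinAvailable / Spec.MaxUnavailable may be set; supplying both is
    // what triggers the "cannot be both specified" error in the log.
    func newMaxUnavailablePDB(namespace string) *policyv1beta1.PodDisruptionBudget {
        maxUnavailable := intstr.FromInt(2)
        return &policyv1beta1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-3", Namespace: namespace},
            Spec: policyv1beta1.PodDisruptionBudgetSpec{
                MaxUnavailable: &maxUnavailable,
            },
        }
    }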
I0111 22:01:53.235] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0111 22:01:53.235] Name:               env-test-pod
I0111 22:01:53.235] Namespace:          test-kubectl-describe-pod
I0111 22:01:53.235] Priority:           0
I0111 22:01:53.235] PriorityClassName:  <none>
I0111 22:01:53.236] Node:               <none>
... skipping 145 lines ...
W0111 22:02:05.312] I0111 22:02:04.099695   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244119-23400", Name:"modified", UID:"8779ac68-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-zgk25
W0111 22:02:05.312] I0111 22:02:04.842427   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244119-23400", Name:"modified", UID:"87ebe97d-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-gprll
I0111 22:02:05.499] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:05.659] (Bpod/valid-pod created
I0111 22:02:05.767] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:02:05.932] (BSuccessful
I0111 22:02:05.933] message:Error from server: cannot restore map from string
I0111 22:02:05.933] has:cannot restore map from string
I0111 22:02:06.025] Successful
I0111 22:02:06.026] message:pod/valid-pod patched (no change)
I0111 22:02:06.026] has:patched (no change)
I0111 22:02:06.117] pod/valid-pod patched
I0111 22:02:06.218] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 4 lines ...
I0111 22:02:06.679] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 22:02:06.761] (Bpod/valid-pod patched
I0111 22:02:06.856] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0111 22:02:06.934] (Bpod/valid-pod patched
I0111 22:02:07.032] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0111 22:02:07.202] (Bpod/valid-pod patched
W0111 22:02:07.303] E0111 22:02:05.924695   52691 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0111 22:02:07.404] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0111 22:02:07.502] (B+++ [0111 22:02:07] "kubectl patch with resourceVersion 492" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
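
The Conflict above is the apiserver's optimistic-concurrency check: the patch carried a stale resourceVersion (492) and is rejected because the stored object has since changed. The test provokes this deliberately; real client code usually absorbs it with a read-modify-update retry, for example with client-go's retry helper (a sketch, not what the test script does; the pod name and image are taken from the surrounding log):

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // patchImageWithRetry re-reads valid-pod and reapplies the image change
    // whenever the apiserver answers 409 Conflict because the resourceVersion
    // it was sent is no longer current.
    func patchImageWithRetry(client kubernetes.Interface, namespace string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            pod, err := client.CoreV1().Pods(namespace).Get("valid-pod", metav1.GetOptions{})
            if err != nil {
                return err
            }
            pod.Spec.Containers[0].Image = "nginx"
            _, err = client.CoreV1().Pods(namespace).Update(pod)
            return err
        })
    }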
I0111 22:02:07.799] pod "valid-pod" deleted
I0111 22:02:07.810] pod/valid-pod replaced
I0111 22:02:07.919] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0111 22:02:08.093] (BSuccessful
I0111 22:02:08.093] message:error: --grace-period must have --force specified
I0111 22:02:08.093] has:\-\-grace-period must have \-\-force specified
I0111 22:02:08.267] Successful
I0111 22:02:08.268] message:error: --timeout must have --force specified
I0111 22:02:08.268] has:\-\-timeout must have \-\-force specified
W0111 22:02:08.430] W0111 22:02:08.429794   56042 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0111 22:02:08.530] node/node-v1-test created
I0111 22:02:08.601] node/node-v1-test replaced
W0111 22:02:08.702] I0111 22:02:08.673369   56042 event.go:221] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"8a0fe9a6-15ec-11e9-a03c-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I0111 22:02:08.802] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0111 22:02:08.803] (Bnode "node-v1-test" deleted
I0111 22:02:08.888] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 38 lines ...
I0111 22:02:12.003] core.sh:628: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:12.158] (Bpod/test-pod created
W0111 22:02:12.259] Edit cancelled, no changes made.
W0111 22:02:12.259] Edit cancelled, no changes made.
W0111 22:02:12.259] Edit cancelled, no changes made.
W0111 22:02:12.259] Edit cancelled, no changes made.
W0111 22:02:12.259] error: 'name' already has a value (valid-pod), and --overwrite is false
W0111 22:02:12.260] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:02:12.360] core.sh:632: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0111 22:02:12.570] (Bpod/test-pod replaced
I0111 22:02:12.673] core.sh:640: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-replaced
I0111 22:02:12.911] (Bpod/test-pod configured
I0111 22:02:13.007] core.sh:647: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-applied
... skipping 63 lines ...
I0111 22:02:17.643] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0111 22:02:17.646] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:02:17.648] +++ command: run_kubectl_create_error_tests
I0111 22:02:17.658] +++ [0111 22:02:17] Creating namespace namespace-1547244137-16370
I0111 22:02:17.734] namespace/namespace-1547244137-16370 created
I0111 22:02:17.808] Context "test" modified.
I0111 22:02:17.814] +++ [0111 22:02:17] Testing kubectl create with error
W0111 22:02:17.915] Error: required flag(s) "filename" not set
W0111 22:02:17.915] 
W0111 22:02:17.915] 
W0111 22:02:17.915] Examples:
W0111 22:02:17.915]   # Create a pod using the data in pod.json.
W0111 22:02:17.915]   kubectl create -f ./pod.json
W0111 22:02:17.916]   
... skipping 38 lines ...
W0111 22:02:17.921]   kubectl create -f FILENAME [options]
W0111 22:02:17.921] 
W0111 22:02:17.922] Use "kubectl <command> --help" for more information about a given command.
W0111 22:02:17.922] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0111 22:02:17.922] 
W0111 22:02:17.922] required flag(s) "filename" not set
I0111 22:02:18.060] +++ [0111 22:02:18] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:02:18.161] kubectl convert is DEPRECATED and will be removed in a future version.
W0111 22:02:18.161] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 22:02:18.262] +++ exit code: 0
I0111 22:02:18.271] Recording: run_kubectl_apply_tests
I0111 22:02:18.271] Running command: run_kubectl_apply_tests
I0111 22:02:18.289] 
... skipping 17 lines ...
I0111 22:02:19.411] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0111 22:02:20.219] (Bdeployment.extensions "test-deployment-retainkeys" deleted
I0111 22:02:20.317] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:20.472] (Bpod/selector-test-pod created
I0111 22:02:20.574] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 22:02:20.661] (BSuccessful
I0111 22:02:20.661] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 22:02:20.661] has:pods "selector-test-pod-dont-apply" not found
I0111 22:02:20.741] pod "selector-test-pod" deleted
I0111 22:02:20.838] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:21.069] (Bpod/test-pod created (server dry run)
I0111 22:02:21.174] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:21.337] (Bpod/test-pod created
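The "created (server dry run)" line followed by an empty pod list is the expected shape of a server-side dry run: the request is admitted and validated but never persisted. Roughly, using the flag this vintage of kubectl exposes (later renamed --dry-run=server) and assuming a pod.yaml on disk:

kubectl apply --server-dry-run -f pod.yaml   # printed as "created (server dry run)", nothing stored
kubectl get pods                             # still empty
kubectl apply -f pod.yaml                    # the real create that follows in the log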
... skipping 8 lines ...
W0111 22:02:22.288] I0111 22:02:22.287537   52691 clientconn.go:551] parsed scheme: ""
W0111 22:02:22.288] I0111 22:02:22.287567   52691 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 22:02:22.288] I0111 22:02:22.287602   52691 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 22:02:22.289] I0111 22:02:22.287677   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:02:22.289] I0111 22:02:22.288312   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:02:22.294] I0111 22:02:22.293993   52691 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0111 22:02:22.387] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0111 22:02:22.488] kind.mygroup.example.com/myobj created (server dry run)
I0111 22:02:22.488] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0111 22:02:22.583] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:22.755] (Bpod/a created
I0111 22:02:24.063] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0111 22:02:24.151] (BSuccessful
I0111 22:02:24.151] message:Error from server (NotFound): pods "b" not found
I0111 22:02:24.152] has:pods "b" not found
I0111 22:02:24.312] pod/b created
I0111 22:02:24.326] pod/a pruned
I0111 22:02:25.821] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0111 22:02:25.909] (BSuccessful
I0111 22:02:25.909] message:Error from server (NotFound): pods "a" not found
I0111 22:02:25.910] has:pods "a" not found
I0111 22:02:25.988] pod "b" deleted
I0111 22:02:26.086] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:26.249] (Bpod/a created
I0111 22:02:26.349] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0111 22:02:26.435] (BSuccessful
I0111 22:02:26.435] message:Error from server (NotFound): pods "b" not found
I0111 22:02:26.436] has:pods "b" not found
I0111 22:02:26.599] pod/b created
I0111 22:02:26.704] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0111 22:02:26.800] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I0111 22:02:26.883] (Bpod "a" deleted
I0111 22:02:26.889] pod "b" deleted
I0111 22:02:27.060] Successful
I0111 22:02:27.061] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0111 22:02:27.061] has:all resources selected for prune without explicitly passing --all
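The prune error above is kubectl refusing to guess the prune scope: with no label selector, --prune would target everything previously applied, so it insists on an explicit --all. A sketch of the three shapes, assuming a manifests/ directory:

kubectl apply -f manifests/ --prune                # error: all resources selected for prune without explicitly passing --all
kubectl apply -f manifests/ --prune -l app=demo    # prune only objects matching the selector
kubectl apply -f manifests/ --prune --all          # opt in to pruning everything from the applied set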
I0111 22:02:27.224] pod/a created
I0111 22:02:27.232] pod/b created
I0111 22:02:27.242] service/prune-svc created
I0111 22:02:28.551] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0111 22:02:28.655] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 129 lines ...
I0111 22:02:40.483] Context "test" modified.
I0111 22:02:40.490] +++ [0111 22:02:40] Testing kubectl create filter
I0111 22:02:40.584] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:40.747] (Bpod/selector-test-pod created
I0111 22:02:40.849] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0111 22:02:40.941] (BSuccessful
I0111 22:02:40.942] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0111 22:02:40.942] has:pods "selector-test-pod-dont-apply" not found
I0111 22:02:41.021] pod "selector-test-pod" deleted
I0111 22:02:41.041] +++ exit code: 0
I0111 22:02:41.076] Recording: run_kubectl_apply_deployments_tests
I0111 22:02:41.076] Running command: run_kubectl_apply_deployments_tests
I0111 22:02:41.099] 
... skipping 41 lines ...
I0111 22:02:43.171] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:43.260] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:43.351] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:02:43.518] (Bdeployment.extensions/nginx created
I0111 22:02:43.622] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0111 22:02:47.847] (BSuccessful
I0111 22:02:47.848] message:Error from server (Conflict): error when applying patch:
I0111 22:02:47.848] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547244161-29198\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0111 22:02:47.848] to:
I0111 22:02:47.848] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0111 22:02:47.848] Name: "nginx", Namespace: "namespace-1547244161-29198"
I0111 22:02:47.849] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1547244161-29198/deployments/nginx" "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1547244161-29198" "uid":"9ef9d3a4-15ec-11e9-a03c-0242ac110002" "resourceVersion":"712" "generation":'\x01' "creationTimestamp":"2019-01-11T22:02:43Z" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1547244161-29198\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n" "deployment.kubernetes.io/revision":"1"]] "spec":map["template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]]]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-01-11T22:02:43Z" "lastTransitionTime":"2019-01-11T22:02:43Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]]]]}
I0111 22:02:47.850] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0111 22:02:47.850] has:Error from server (Conflict)
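The Conflict appears to be the point of this case: the applied manifest pins metadata.resourceVersion "99" inside its last-applied configuration, so the server's optimistic-concurrency check rejects the patch once the live Deployment (resourceVersion 712 above) has moved on. A sketch of the usual recovery, where the edited copy is hypothetical:

kubectl apply -f hack/testdata/deployment-label-change2.yaml   # Conflict: the file carries a stale resourceVersion
# remove metadata.resourceVersion from a copy of the manifest, then apply that copy
kubectl apply -f deployment-label-change2-edited.yaml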
W0111 22:02:47.950] I0111 22:02:43.521361   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244161-29198", Name:"nginx", UID:"9ef9d3a4-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"699", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5d56d6b95f to 3
W0111 22:02:47.951] I0111 22:02:43.524098   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244161-29198", Name:"nginx-5d56d6b95f", UID:"9efa6aa8-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-n4pxw
W0111 22:02:47.951] I0111 22:02:43.526463   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244161-29198", Name:"nginx-5d56d6b95f", UID:"9efa6aa8-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-ggwwk
W0111 22:02:47.951] I0111 22:02:43.527899   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244161-29198", Name:"nginx-5d56d6b95f", UID:"9efa6aa8-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5d56d6b95f-978d9
W0111 22:02:52.104] I0111 22:02:52.103455   52691 controller.go:606] quota admission added evaluator for: replicasets.extensions
I0111 22:02:53.064] deployment.extensions/nginx configured
... skipping 146 lines ...
I0111 22:03:00.389] +++ [0111 22:03:00] Creating namespace namespace-1547244180-10578
I0111 22:03:00.459] namespace/namespace-1547244180-10578 created
I0111 22:03:00.531] Context "test" modified.
I0111 22:03:00.538] +++ [0111 22:03:00] Testing kubectl get
I0111 22:03:00.630] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:00.718] (BSuccessful
I0111 22:03:00.718] message:Error from server (NotFound): pods "abc" not found
I0111 22:03:00.718] has:pods "abc" not found
I0111 22:03:00.808] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:00.892] (BSuccessful
I0111 22:03:00.892] message:Error from server (NotFound): pods "abc" not found
I0111 22:03:00.892] has:pods "abc" not found
I0111 22:03:00.982] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:01.063] (BSuccessful
I0111 22:03:01.064] message:{
I0111 22:03:01.064]     "apiVersion": "v1",
I0111 22:03:01.064]     "items": [],
... skipping 23 lines ...
I0111 22:03:01.414] has not:No resources found
I0111 22:03:01.497] Successful
I0111 22:03:01.498] message:NAME
I0111 22:03:01.498] has not:No resources found
I0111 22:03:01.588] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:01.703] (BSuccessful
I0111 22:03:01.703] message:error: the server doesn't have a resource type "foobar"
I0111 22:03:01.703] has not:No resources found
I0111 22:03:01.789] Successful
I0111 22:03:01.790] message:No resources found.
I0111 22:03:01.790] has:No resources found
I0111 22:03:01.873] Successful
I0111 22:03:01.874] message:
I0111 22:03:01.874] has not:No resources found
I0111 22:03:01.960] Successful
I0111 22:03:01.960] message:No resources found.
I0111 22:03:01.960] has:No resources found
I0111 22:03:02.050] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:02.138] (BSuccessful
I0111 22:03:02.138] message:Error from server (NotFound): pods "abc" not found
I0111 22:03:02.138] has:pods "abc" not found
I0111 22:03:02.140] FAIL!
I0111 22:03:02.140] message:Error from server (NotFound): pods "abc" not found
I0111 22:03:02.140] has not:List
I0111 22:03:02.140] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0111 22:03:02.261] Successful
I0111 22:03:02.261] message:I0111 22:03:02.206855   68189 loader.go:359] Config loaded from file /tmp/tmp.KI1z8Ncr3N/.kube/config
I0111 22:03:02.261] I0111 22:03:02.207337   68189 loader.go:359] Config loaded from file /tmp/tmp.KI1z8Ncr3N/.kube/config
I0111 22:03:02.261] I0111 22:03:02.208647   68189 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
... skipping 995 lines ...
I0111 22:03:05.867] }
I0111 22:03:05.970] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:03:06.258] (B<no value>Successful
I0111 22:03:06.258] message:valid-pod:
I0111 22:03:06.259] has:valid-pod:
I0111 22:03:06.353] Successful
I0111 22:03:06.354] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0111 22:03:06.354] 	template was:
I0111 22:03:06.354] 		{.missing}
I0111 22:03:06.354] 	object given to jsonpath engine was:
I0111 22:03:06.355] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1547244185-31762", "selfLink":"/api/v1/namespaces/namespace-1547244185-31762/pods/valid-pod", "uid":"ac3bc12f-15ec-11e9-a03c-0242ac110002", "resourceVersion":"809", "creationTimestamp":"2019-01-11T22:03:05Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0111 22:03:06.355] has:missing is not found
I0111 22:03:06.451] Successful
I0111 22:03:06.451] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0111 22:03:06.451] 	template was:
I0111 22:03:06.451] 		{{.missing}}
I0111 22:03:06.452] 	raw data was:
I0111 22:03:06.452] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-01-11T22:03:05Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1547244185-31762","resourceVersion":"809","selfLink":"/api/v1/namespaces/namespace-1547244185-31762/pods/valid-pod","uid":"ac3bc12f-15ec-11e9-a03c-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0111 22:03:06.452] 	object given to template engine was:
I0111 22:03:06.453] 		map[apiVersion:v1 kind:Pod metadata:map[resourceVersion:809 selfLink:/api/v1/namespaces/namespace-1547244185-31762/pods/valid-pod uid:ac3bc12f-15ec-11e9-a03c-0242ac110002 creationTimestamp:2019-01-11T22:03:05Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1547244185-31762] spec:map[schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always] status:map[phase:Pending qosClass:Guaranteed]]
I0111 22:03:06.453] has:map has no entry for key "missing"
W0111 22:03:06.554] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
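Both template failures above are the output printers refusing to silently render a key the object does not have. The equivalent invocations, assuming the valid-pod created earlier in this namespace:

kubectl get pod valid-pod -o jsonpath='{.missing}'        # error executing jsonpath: missing is not found
kubectl get pod valid-pod -o go-template='{{.missing}}'   # map has no entry for key "missing"
kubectl get pod valid-pod -o jsonpath='{.metadata.name}'  # a key that exists prints normally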
W0111 22:03:07.542] E0111 22:03:07.541683   68568 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0111 22:03:07.643] Successful
I0111 22:03:07.643] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 22:03:07.644] valid-pod   0/1     Pending   0          1s
I0111 22:03:07.644] has:STATUS
I0111 22:03:07.644] Successful
... skipping 80 lines ...
I0111 22:03:09.862]   terminationGracePeriodSeconds: 30
I0111 22:03:09.862] status:
I0111 22:03:09.862]   phase: Pending
I0111 22:03:09.862]   qosClass: Guaranteed
I0111 22:03:09.863] has:name: valid-pod
I0111 22:03:09.865] Successful
I0111 22:03:09.866] message:Error from server (NotFound): pods "invalid-pod" not found
I0111 22:03:09.866] has:"invalid-pod" not found
I0111 22:03:09.963] pod "valid-pod" deleted
I0111 22:03:10.081] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:10.248] (Bpod/redis-master created
I0111 22:03:10.252] pod/valid-pod created
I0111 22:03:10.347] Successful
... skipping 324 lines ...
I0111 22:03:14.740] Running command: run_create_secret_tests
I0111 22:03:14.764] 
I0111 22:03:14.766] +++ Running case: test-cmd.run_create_secret_tests 
I0111 22:03:14.769] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:03:14.772] +++ command: run_create_secret_tests
I0111 22:03:14.869] Successful
I0111 22:03:14.870] message:Error from server (NotFound): secrets "mysecret" not found
I0111 22:03:14.870] has:secrets "mysecret" not found
I0111 22:03:15.033] Successful
I0111 22:03:15.033] message:Error from server (NotFound): secrets "mysecret" not found
I0111 22:03:15.034] has:secrets "mysecret" not found
I0111 22:03:15.036] Successful
I0111 22:03:15.036] message:user-specified
I0111 22:03:15.036] has:user-specified
I0111 22:03:15.110] Successful
I0111 22:03:15.187] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"b1d9ee17-15ec-11e9-a03c-0242ac110002","resourceVersion":"884","creationTimestamp":"2019-01-11T22:03:15Z"}}
... skipping 80 lines ...
I0111 22:03:17.135] has:Timeout exceeded while reading body
I0111 22:03:17.215] Successful
I0111 22:03:17.216] message:NAME        READY   STATUS    RESTARTS   AGE
I0111 22:03:17.216] valid-pod   0/1     Pending   0          2s
I0111 22:03:17.216] has:valid-pod
I0111 22:03:17.286] Successful
I0111 22:03:17.287] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0111 22:03:17.287] has:Invalid timeout value
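This block appears to exercise the --request-timeout client flag: a bare integer is read as seconds, an integer with a unit is accepted, and anything else is rejected with the message above. Roughly:

kubectl get pod valid-pod --request-timeout=1        # one second
kubectl get pod valid-pod --request-timeout=5m       # integer plus unit
kubectl get pod valid-pod --request-timeout=invalid  # error: Invalid timeout value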
I0111 22:03:17.365] pod "valid-pod" deleted
I0111 22:03:17.388] +++ exit code: 0
I0111 22:03:17.425] Recording: run_crd_tests
I0111 22:03:17.425] Running command: run_crd_tests
I0111 22:03:17.447] 
... skipping 167 lines ...
W0111 22:03:22.064] I0111 22:03:20.201680   52691 controller.go:606] quota admission added evaluator for: foos.company.com
I0111 22:03:22.164] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0111 22:03:22.164] (Bfoo.company.com/test patched
I0111 22:03:22.263] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0111 22:03:22.350] (Bfoo.company.com/test patched
I0111 22:03:22.445] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0111 22:03:22.602] (B+++ [0111 22:03:22] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
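Strategic merge patch relies on struct markers that only built-in types carry, so for a CustomResource kubectl patch --local can offer nothing better than a JSON merge patch, which is what the hint says. A sketch, assuming foo.yaml is a local copy of the object:

kubectl patch -f foo.yaml --local -p '{"patched":"value3"}' -o yaml                # error: cannot apply strategic merge patch ... try --type merge
kubectl patch -f foo.yaml --local --type merge -p '{"patched":"value3"}' -o yaml   # merge patch works locally
kubectl patch foos/test --type merge -p '{"patched":null}' --record                # the server-side form whose change-cause appears just below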
I0111 22:03:22.668] {
I0111 22:03:22.668]     "apiVersion": "company.com/v1",
I0111 22:03:22.668]     "kind": "Foo",
I0111 22:03:22.668]     "metadata": {
I0111 22:03:22.668]         "annotations": {
I0111 22:03:22.668]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I0111 22:03:24.196] bar.company.com "test" deleted
W0111 22:03:24.297] I0111 22:03:23.922080   52691 controller.go:606] quota admission added evaluator for: bars.company.com
W0111 22:03:24.297] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 71065 Killed                  while [ ${tries} -lt 10 ]; do
W0111 22:03:24.297]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0111 22:03:24.297] done
W0111 22:03:24.298] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 71064 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0111 22:03:30.024] E0111 22:03:30.023383   56042 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0111 22:03:30.160] I0111 22:03:30.159920   56042 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:03:30.162] I0111 22:03:30.161777   52691 clientconn.go:551] parsed scheme: ""
W0111 22:03:30.162] I0111 22:03:30.161805   52691 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0111 22:03:30.162] I0111 22:03:30.161836   52691 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0111 22:03:30.162] I0111 22:03:30.161884   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:03:30.163] I0111 22:03:30.162298   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I0111 22:03:42.783] +++ [0111 22:03:42] Testing cmd with image
I0111 22:03:42.878] Successful
I0111 22:03:42.879] message:deployment.apps/test1 created
I0111 22:03:42.879] has:deployment.apps/test1 created
I0111 22:03:42.961] deployment.extensions "test1" deleted
I0111 22:03:43.038] Successful
I0111 22:03:43.039] message:error: Invalid image name "InvalidImageName": invalid reference format
I0111 22:03:43.039] has:error: Invalid image name "InvalidImageName": invalid reference format
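kubectl run validates the image reference before creating anything, so a name that is not a valid reference fails fast. Roughly, with the deprecated generator this run still uses and an assumed placeholder image:

kubectl run test1 --image=k8s.gcr.io/pause --generator=deployment/apps.v1   # creates deployment.apps/test1 (generator is deprecated)
kubectl run test2 --image=InvalidImageName                                  # error: Invalid image name "InvalidImageName": invalid reference format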
I0111 22:03:43.055] +++ exit code: 0
I0111 22:03:43.093] Recording: run_recursive_resources_tests
I0111 22:03:43.094] Running command: run_recursive_resources_tests
I0111 22:03:43.117] 
I0111 22:03:43.119] +++ Running case: test-cmd.run_recursive_resources_tests 
I0111 22:03:43.122] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0111 22:03:43.290] Context "test" modified.
I0111 22:03:43.389] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:43.658] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:43.661] (BSuccessful
I0111 22:03:43.661] message:pod/busybox0 created
I0111 22:03:43.661] pod/busybox1 created
I0111 22:03:43.661] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:03:43.661] has:error validating data: kind not set
I0111 22:03:43.756] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:43.939] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0111 22:03:43.942] (BSuccessful
I0111 22:03:43.943] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:43.943] has:Object 'Kind' is missing
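Every "Object 'Kind' is missing" in this run traces back to the same deliberately broken fixture: the JSON quoted in the error spells the kind key as "ind". Restoring that one key is enough for the object to decode; a sketch with a hypothetical file name:

cat <<'EOF' > busybox2-fixed.json
{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}
EOF
kubectl create -f busybox2-fixed.json   # now decodes and creates pod/busybox2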
I0111 22:03:44.043] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:44.314] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 22:03:44.316] (BSuccessful
I0111 22:03:44.317] message:pod/busybox0 replaced
I0111 22:03:44.317] pod/busybox1 replaced
I0111 22:03:44.317] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:03:44.317] has:error validating data: kind not set
I0111 22:03:44.417] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:44.527] (BSuccessful
I0111 22:03:44.528] message:Name:               busybox0
I0111 22:03:44.528] Namespace:          namespace-1547244223-2979
I0111 22:03:44.528] Priority:           0
I0111 22:03:44.528] PriorityClassName:  <none>
... skipping 159 lines ...
I0111 22:03:44.547] has:Object 'Kind' is missing
I0111 22:03:44.631] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:44.844] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0111 22:03:44.846] (BSuccessful
I0111 22:03:44.847] message:pod/busybox0 annotated
I0111 22:03:44.847] pod/busybox1 annotated
I0111 22:03:44.847] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:44.847] has:Object 'Kind' is missing
I0111 22:03:44.950] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:45.212] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0111 22:03:45.214] (BSuccessful
I0111 22:03:45.215] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 22:03:45.215] pod/busybox0 configured
I0111 22:03:45.215] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0111 22:03:45.215] pod/busybox1 configured
I0111 22:03:45.215] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0111 22:03:45.215] has:error validating data: kind not set
I0111 22:03:45.299] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:45.443] (Bdeployment.apps/nginx created
I0111 22:03:45.543] generic-resources.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0111 22:03:45.639] (Bgeneric-resources.sh:269: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:03:45.824] (Bgeneric-resources.sh:273: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0111 22:03:45.826] (BSuccessful
... skipping 37 lines ...
I0111 22:03:45.831]       schedulerName: default-scheduler
I0111 22:03:45.831]       securityContext: {}
I0111 22:03:45.831]       terminationGracePeriodSeconds: 30
I0111 22:03:45.831] status: {}
I0111 22:03:45.831] has:apps/v1
I0111 22:03:45.918] deployment.extensions "nginx" deleted
W0111 22:03:46.019] Error from server (NotFound): namespaces "non-native-resources" not found
W0111 22:03:46.019] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0111 22:03:46.019] I0111 22:03:42.867545   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244222-3473", Name:"test1", UID:"c2592ff3-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"992", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-fb488bd5d to 1
W0111 22:03:46.020] I0111 22:03:42.872363   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244222-3473", Name:"test1-fb488bd5d", UID:"c259d513-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-fb488bd5d-8t72r
W0111 22:03:46.020] I0111 22:03:45.446977   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244223-2979", Name:"nginx", UID:"c3e3085b-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6f6bb85d9c to 3
W0111 22:03:46.020] I0111 22:03:45.451147   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx-6f6bb85d9c", UID:"c3e39093-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-vxq5d
W0111 22:03:46.021] I0111 22:03:45.453752   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx-6f6bb85d9c", UID:"c3e39093-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6f6bb85d9c-hm4hr
... skipping 2 lines ...
W0111 22:03:46.021] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 22:03:46.122] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:46.204] (Bgeneric-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:46.206] (BSuccessful
I0111 22:03:46.206] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0111 22:03:46.206] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0111 22:03:46.207] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.207] has:Object 'Kind' is missing
I0111 22:03:46.303] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:46.391] (BSuccessful
I0111 22:03:46.391] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.392] has:busybox0:busybox1:
I0111 22:03:46.393] Successful
I0111 22:03:46.394] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.394] has:Object 'Kind' is missing
I0111 22:03:46.486] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:46.572] (Bpod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.664] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0111 22:03:46.666] (BSuccessful
I0111 22:03:46.666] message:pod/busybox0 labeled
I0111 22:03:46.666] pod/busybox1 labeled
I0111 22:03:46.667] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.667] has:Object 'Kind' is missing
I0111 22:03:46.757] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:46.847] (Bpod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.943] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0111 22:03:46.945] (BSuccessful
I0111 22:03:46.945] message:pod/busybox0 patched
I0111 22:03:46.945] pod/busybox1 patched
I0111 22:03:46.945] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:46.946] has:Object 'Kind' is missing
I0111 22:03:47.041] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:47.245] (Bgeneric-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:47.247] (BSuccessful
I0111 22:03:47.247] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:03:47.247] pod "busybox0" force deleted
I0111 22:03:47.247] pod "busybox1" force deleted
I0111 22:03:47.248] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0111 22:03:47.248] has:Object 'Kind' is missing
I0111 22:03:47.344] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:47.513] (Breplicationcontroller/busybox0 created
I0111 22:03:47.517] replicationcontroller/busybox1 created
W0111 22:03:47.618] I0111 22:03:46.943832   56042 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0111 22:03:47.618] I0111 22:03:47.516997   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox0", UID:"c51edc61-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-5jm6x
W0111 22:03:47.619] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:03:47.619] I0111 22:03:47.520601   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox1", UID:"c51f8ec5-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-n648v
I0111 22:03:47.719] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:47.737] (Bgeneric-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:47.826] (Bgeneric-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:03:47.925] (Bgeneric-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:03:48.128] (Bgeneric-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 22:03:48.223] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0111 22:03:48.226] (BSuccessful
I0111 22:03:48.226] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0111 22:03:48.226] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0111 22:03:48.226] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:48.227] has:Object 'Kind' is missing
I0111 22:03:48.310] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0111 22:03:48.403] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0111 22:03:48.521] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:48.621] (Bgeneric-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:03:48.720] (Bgeneric-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:03:48.944] (Bgeneric-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 22:03:49.046] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0111 22:03:49.048] (BSuccessful
I0111 22:03:49.048] message:service/busybox0 exposed
I0111 22:03:49.048] service/busybox1 exposed
I0111 22:03:49.049] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:49.049] has:Object 'Kind' is missing
I0111 22:03:49.152] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:49.245] (Bgeneric-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0111 22:03:49.328] (Bgeneric-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0111 22:03:49.517] (Bgeneric-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0111 22:03:49.605] (Bgeneric-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0111 22:03:49.607] (BSuccessful
I0111 22:03:49.608] message:replicationcontroller/busybox0 scaled
I0111 22:03:49.608] replicationcontroller/busybox1 scaled
I0111 22:03:49.608] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:49.608] has:Object 'Kind' is missing
I0111 22:03:49.706] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:49.890] (Bgeneric-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:49.893] (BSuccessful
I0111 22:03:49.893] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:03:49.893] replicationcontroller "busybox0" force deleted
I0111 22:03:49.894] replicationcontroller "busybox1" force deleted
I0111 22:03:49.894] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:49.894] has:Object 'Kind' is missing
I0111 22:03:49.985] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:50.156] (Bdeployment.apps/nginx1-deployment created
I0111 22:03:50.161] deployment.apps/nginx0-deployment created
W0111 22:03:50.262] I0111 22:03:49.417551   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox0", UID:"c51edc61-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jgjzp
W0111 22:03:50.263] I0111 22:03:49.425578   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox1", UID:"c51f8ec5-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bj966
W0111 22:03:50.263] I0111 22:03:50.159891   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244223-2979", Name:"nginx1-deployment", UID:"c6b20849-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-75f6fc6747 to 2
W0111 22:03:50.264] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:03:50.264] I0111 22:03:50.173962   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx1-deployment-75f6fc6747", UID:"c6b2ab06-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-nkq2k
W0111 22:03:50.264] I0111 22:03:50.177044   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244223-2979", Name:"nginx0-deployment", UID:"c6b2c6ae-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-b6bb4ccbb to 2
W0111 22:03:50.265] I0111 22:03:50.178830   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx1-deployment-75f6fc6747", UID:"c6b2ab06-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-75f6fc6747-qhbxn
W0111 22:03:50.265] I0111 22:03:50.181822   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx0-deployment-b6bb4ccbb", UID:"c6b53224-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-7z7cs
W0111 22:03:50.265] I0111 22:03:50.184638   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244223-2979", Name:"nginx0-deployment-b6bb4ccbb", UID:"c6b53224-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-b6bb4ccbb-k65t2
I0111 22:03:50.366] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0111 22:03:50.378] (Bgeneric-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 22:03:50.591] (Bgeneric-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0111 22:03:50.593] (BSuccessful
I0111 22:03:50.593] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0111 22:03:50.593] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0111 22:03:50.594] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:03:50.594] has:Object 'Kind' is missing
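The "skipped rollback" lines come from a recursive rollout undo over a directory that still contains the broken manifest: the two decodable deployments are already at revision 1, so there is nothing to undo, and the broken file fails to decode as before. Roughly:

kubectl rollout undo -f hack/testdata/recursive/deployment -R
# deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
# deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
# error: unable to decode ".../nginx-broken.yaml": Object 'Kind' is missing ...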
I0111 22:03:50.696] deployment.apps/nginx1-deployment paused
I0111 22:03:50.701] deployment.apps/nginx0-deployment paused
I0111 22:03:50.815] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0111 22:03:50.817] (BSuccessful
I0111 22:03:50.817] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0111 22:03:51.123] 1         <none>
I0111 22:03:51.123] 
I0111 22:03:51.123] deployment.apps/nginx0-deployment 
I0111 22:03:51.123] REVISION  CHANGE-CAUSE
I0111 22:03:51.124] 1         <none>
I0111 22:03:51.124] 
I0111 22:03:51.124] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:03:51.124] has:nginx0-deployment
I0111 22:03:51.126] Successful
I0111 22:03:51.126] message:deployment.apps/nginx1-deployment 
I0111 22:03:51.126] REVISION  CHANGE-CAUSE
I0111 22:03:51.126] 1         <none>
I0111 22:03:51.126] 
I0111 22:03:51.127] deployment.apps/nginx0-deployment 
I0111 22:03:51.127] REVISION  CHANGE-CAUSE
I0111 22:03:51.127] 1         <none>
I0111 22:03:51.127] 
I0111 22:03:51.127] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:03:51.127] has:nginx1-deployment
I0111 22:03:51.128] Successful
I0111 22:03:51.128] message:deployment.apps/nginx1-deployment 
I0111 22:03:51.128] REVISION  CHANGE-CAUSE
I0111 22:03:51.129] 1         <none>
I0111 22:03:51.129] 
I0111 22:03:51.129] deployment.apps/nginx0-deployment 
I0111 22:03:51.129] REVISION  CHANGE-CAUSE
I0111 22:03:51.129] 1         <none>
I0111 22:03:51.129] 
I0111 22:03:51.130] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:03:51.130] has:Object 'Kind' is missing
I0111 22:03:51.218] deployment.apps "nginx1-deployment" force deleted
I0111 22:03:51.223] deployment.apps "nginx0-deployment" force deleted
W0111 22:03:51.324] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:03:51.324] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0111 22:03:52.323] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:03:52.483] (Breplicationcontroller/busybox0 created
I0111 22:03:52.487] replicationcontroller/busybox1 created
W0111 22:03:52.588] I0111 22:03:52.486732   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox0", UID:"c8153bf8-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-hftsx
W0111 22:03:52.588] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0111 22:03:52.589] I0111 22:03:52.490722   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244223-2979", Name:"busybox1", UID:"c815ef8d-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-68k2w
I0111 22:03:52.689] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0111 22:03:52.718] (BSuccessful
I0111 22:03:52.719] message:no rollbacker has been implemented for "ReplicationController"
I0111 22:03:52.719] no rollbacker has been implemented for "ReplicationController"
I0111 22:03:52.719] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0111 22:03:52.720] message:no rollbacker has been implemented for "ReplicationController"
I0111 22:03:52.720] no rollbacker has been implemented for "ReplicationController"
I0111 22:03:52.721] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.721] has:Object 'Kind' is missing
I0111 22:03:52.827] Successful
I0111 22:03:52.827] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.827] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:03:52.828] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:03:52.828] has:Object 'Kind' is missing
I0111 22:03:52.828] Successful
I0111 22:03:52.829] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.829] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:03:52.829] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:03:52.829] has:replicationcontrollers "busybox0" pausing is not supported
I0111 22:03:52.831] Successful
I0111 22:03:52.832] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.832] error: replicationcontrollers "busybox0" pausing is not supported
I0111 22:03:52.832] error: replicationcontrollers "busybox1" pausing is not supported
I0111 22:03:52.832] has:replicationcontrollers "busybox1" pausing is not supported
I0111 22:03:52.938] Successful
I0111 22:03:52.939] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.939] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:03:52.939] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:03:52.939] has:Object 'Kind' is missing
I0111 22:03:52.940] Successful
I0111 22:03:52.941] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.941] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:03:52.941] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:03:52.941] has:replicationcontrollers "busybox0" resuming is not supported
I0111 22:03:52.942] Successful
I0111 22:03:52.943] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:52.943] error: replicationcontrollers "busybox0" resuming is not supported
I0111 22:03:52.943] error: replicationcontrollers "busybox1" resuming is not supported
I0111 22:03:52.943] has:replicationcontrollers "busybox0" resuming is not supported
I0111 22:03:53.030] replicationcontroller "busybox0" force deleted
I0111 22:03:53.036] replicationcontroller "busybox1" force deleted
W0111 22:03:53.137] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0111 22:03:53.138] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0111 22:03:54.063] +++ exit code: 0
I0111 22:03:54.124] Recording: run_namespace_tests
I0111 22:03:54.125] Running command: run_namespace_tests
I0111 22:03:54.144] 
I0111 22:03:54.146] +++ Running case: test-cmd.run_namespace_tests 
I0111 22:03:54.149] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:03:54.151] +++ command: run_namespace_tests
I0111 22:03:54.158] +++ [0111 22:03:54] Testing kubectl(v1:namespaces)
I0111 22:03:54.236] namespace/my-namespace created
I0111 22:03:54.329] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0111 22:03:54.426] namespace "my-namespace" deleted
I0111 22:03:59.526] namespace/my-namespace condition met
I0111 22:03:59.616] Successful
I0111 22:03:59.616] message:Error from server (NotFound): namespaces "my-namespace" not found
I0111 22:03:59.616] has: not found
I0111 22:03:59.732] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0111 22:03:59.805] namespace/other created
I0111 22:03:59.914] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0111 22:04:00.010] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:04:00.170] pod/valid-pod created
I0111 22:04:00.266] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:04:00.364] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:04:00.457] Successful
I0111 22:04:00.457] message:error: a resource cannot be retrieved by name across all namespaces
I0111 22:04:00.457] has:a resource cannot be retrieved by name across all namespaces
W0111 22:04:00.558] E0111 22:04:00.075416   56042 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0111 22:04:00.558] I0111 22:04:00.312597   56042 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0111 22:04:00.559] I0111 22:04:00.412900   56042 controller_utils.go:1028] Caches are synced for garbage collector controller
W0111 22:04:00.641] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0111 22:04:00.742] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0111 22:04:00.742] pod "valid-pod" force deleted
I0111 22:04:00.753] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 118 lines ...
I0111 22:04:21.526] +++ command: run_client_config_tests
I0111 22:04:21.540] +++ [0111 22:04:21] Creating namespace namespace-1547244261-28636
I0111 22:04:21.610] namespace/namespace-1547244261-28636 created
I0111 22:04:21.681] Context "test" modified.
I0111 22:04:21.688] +++ [0111 22:04:21] Testing client config
I0111 22:04:21.760] Successful
I0111 22:04:21.760] message:error: stat missing: no such file or directory
I0111 22:04:21.760] has:missing: no such file or directory
I0111 22:04:21.833] Successful
I0111 22:04:21.833] message:error: stat missing: no such file or directory
I0111 22:04:21.833] has:missing: no such file or directory
I0111 22:04:21.902] Successful
I0111 22:04:21.902] message:error: stat missing: no such file or directory
I0111 22:04:21.902] has:missing: no such file or directory
I0111 22:04:21.973] Successful
I0111 22:04:21.974] message:Error in configuration: context was not found for specified context: missing-context
I0111 22:04:21.974] has:context was not found for specified context: missing-context
I0111 22:04:22.050] Successful
I0111 22:04:22.050] message:error: no server found for cluster "missing-cluster"
I0111 22:04:22.050] has:no server found for cluster "missing-cluster"
I0111 22:04:22.127] Successful
I0111 22:04:22.127] message:error: auth info "missing-user" does not exist
I0111 22:04:22.127] has:auth info "missing-user" does not exist
I0111 22:04:22.281] Successful
I0111 22:04:22.282] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0111 22:04:22.282] has:Error loading config file
I0111 22:04:22.355] Successful
I0111 22:04:22.356] message:error: stat missing-config: no such file or directory
I0111 22:04:22.356] has:no such file or directory
I0111 22:04:22.370] +++ exit code: 0
I0111 22:04:22.408] Recording: run_service_accounts_tests
I0111 22:04:22.408] Running command: run_service_accounts_tests
I0111 22:04:22.428] 
I0111 22:04:22.431] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0111 22:04:29.276] Labels:                        run=pi
I0111 22:04:29.276] Annotations:                   <none>
I0111 22:04:29.277] Schedule:                      59 23 31 2 *
I0111 22:04:29.277] Concurrency Policy:            Allow
I0111 22:04:29.277] Suspend:                       False
I0111 22:04:29.277] Successful Job History Limit:  824640928248
I0111 22:04:29.277] Failed Job History Limit:      1
I0111 22:04:29.277] Starting Deadline Seconds:     <unset>
I0111 22:04:29.277] Selector:                      <unset>
I0111 22:04:29.277] Parallelism:                   <unset>
I0111 22:04:29.277] Completions:                   <unset>
I0111 22:04:29.277] Pod Template:
I0111 22:04:29.277]   Labels:  run=pi
... skipping 31 lines ...
I0111 22:04:29.815]                 job-name=test-job
I0111 22:04:29.815]                 run=pi
I0111 22:04:29.815] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0111 22:04:29.815] Parallelism:    1
I0111 22:04:29.815] Completions:    1
I0111 22:04:29.815] Start Time:     Fri, 11 Jan 2019 22:04:29 +0000
I0111 22:04:29.815] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0111 22:04:29.815] Pod Template:
I0111 22:04:29.815]   Labels:  controller-uid=de2b32b3-15ec-11e9-a03c-0242ac110002
I0111 22:04:29.815]            job-name=test-job
I0111 22:04:29.815]            run=pi
I0111 22:04:29.815]   Containers:
I0111 22:04:29.815]    pi:
... skipping 329 lines ...
I0111 22:04:39.599]   selector:
I0111 22:04:39.600]     role: padawan
I0111 22:04:39.600]   sessionAffinity: None
I0111 22:04:39.600]   type: ClusterIP
I0111 22:04:39.600] status:
I0111 22:04:39.600]   loadBalancer: {}
W0111 22:04:39.700] error: you must specify resources by --filename when --local is set.
W0111 22:04:39.701] Example resource specifications include:
W0111 22:04:39.701]    '-f rsrc.yaml'
W0111 22:04:39.701]    '--filename=rsrc.json'
I0111 22:04:39.801] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0111 22:04:39.945] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0111 22:04:40.033] service "redis-master" deleted
... skipping 94 lines ...
I0111 22:04:46.134] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:04:46.234] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 22:04:46.336] daemonset.extensions/bind rolled back
I0111 22:04:46.438] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:04:46.536] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:04:46.643] Successful
I0111 22:04:46.644] message:error: unable to find specified revision 1000000 in history
I0111 22:04:46.644] has:unable to find specified revision
I0111 22:04:46.739] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:04:46.835] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:04:46.943] daemonset.extensions/bind rolled back
I0111 22:04:47.040] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0111 22:04:47.129] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0111 22:04:48.544] Namespace:    namespace-1547244287-392
I0111 22:04:48.544] Selector:     app=guestbook,tier=frontend
I0111 22:04:48.544] Labels:       app=guestbook
I0111 22:04:48.545]               tier=frontend
I0111 22:04:48.545] Annotations:  <none>
I0111 22:04:48.545] Replicas:     3 current / 3 desired
I0111 22:04:48.545] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:48.545] Pod Template:
I0111 22:04:48.545]   Labels:  app=guestbook
I0111 22:04:48.545]            tier=frontend
I0111 22:04:48.545]   Containers:
I0111 22:04:48.545]    php-redis:
I0111 22:04:48.546]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:04:48.671] Namespace:    namespace-1547244287-392
I0111 22:04:48.671] Selector:     app=guestbook,tier=frontend
I0111 22:04:48.671] Labels:       app=guestbook
I0111 22:04:48.672]               tier=frontend
I0111 22:04:48.672] Annotations:  <none>
I0111 22:04:48.672] Replicas:     3 current / 3 desired
I0111 22:04:48.672] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:48.672] Pod Template:
I0111 22:04:48.672]   Labels:  app=guestbook
I0111 22:04:48.672]            tier=frontend
I0111 22:04:48.672]   Containers:
I0111 22:04:48.672]    php-redis:
I0111 22:04:48.673]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 24 lines ...
I0111 22:04:48.878] Namespace:    namespace-1547244287-392
I0111 22:04:48.878] Selector:     app=guestbook,tier=frontend
I0111 22:04:48.878] Labels:       app=guestbook
I0111 22:04:48.878]               tier=frontend
I0111 22:04:48.878] Annotations:  <none>
I0111 22:04:48.878] Replicas:     3 current / 3 desired
I0111 22:04:48.879] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:48.879] Pod Template:
I0111 22:04:48.879]   Labels:  app=guestbook
I0111 22:04:48.879]            tier=frontend
I0111 22:04:48.879]   Containers:
I0111 22:04:48.879]    php-redis:
I0111 22:04:48.879]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0111 22:04:48.908] Namespace:    namespace-1547244287-392
I0111 22:04:48.908] Selector:     app=guestbook,tier=frontend
I0111 22:04:48.908] Labels:       app=guestbook
I0111 22:04:48.908]               tier=frontend
I0111 22:04:48.908] Annotations:  <none>
I0111 22:04:48.908] Replicas:     3 current / 3 desired
I0111 22:04:48.909] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:48.909] Pod Template:
I0111 22:04:48.909]   Labels:  app=guestbook
I0111 22:04:48.909]            tier=frontend
I0111 22:04:48.909]   Containers:
I0111 22:04:48.909]    php-redis:
I0111 22:04:48.909]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0111 22:04:49.058] Namespace:    namespace-1547244287-392
I0111 22:04:49.058] Selector:     app=guestbook,tier=frontend
I0111 22:04:49.058] Labels:       app=guestbook
I0111 22:04:49.058]               tier=frontend
I0111 22:04:49.058] Annotations:  <none>
I0111 22:04:49.058] Replicas:     3 current / 3 desired
I0111 22:04:49.059] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:49.059] Pod Template:
I0111 22:04:49.059]   Labels:  app=guestbook
I0111 22:04:49.059]            tier=frontend
I0111 22:04:49.059]   Containers:
I0111 22:04:49.059]    php-redis:
I0111 22:04:49.059]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:04:49.174] Namespace:    namespace-1547244287-392
I0111 22:04:49.174] Selector:     app=guestbook,tier=frontend
I0111 22:04:49.174] Labels:       app=guestbook
I0111 22:04:49.174]               tier=frontend
I0111 22:04:49.174] Annotations:  <none>
I0111 22:04:49.175] Replicas:     3 current / 3 desired
I0111 22:04:49.175] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:49.175] Pod Template:
I0111 22:04:49.175]   Labels:  app=guestbook
I0111 22:04:49.175]            tier=frontend
I0111 22:04:49.175]   Containers:
I0111 22:04:49.175]    php-redis:
I0111 22:04:49.175]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0111 22:04:49.289] Namespace:    namespace-1547244287-392
I0111 22:04:49.290] Selector:     app=guestbook,tier=frontend
I0111 22:04:49.290] Labels:       app=guestbook
I0111 22:04:49.290]               tier=frontend
I0111 22:04:49.290] Annotations:  <none>
I0111 22:04:49.290] Replicas:     3 current / 3 desired
I0111 22:04:49.290] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:49.290] Pod Template:
I0111 22:04:49.290]   Labels:  app=guestbook
I0111 22:04:49.290]            tier=frontend
I0111 22:04:49.291]   Containers:
I0111 22:04:49.291]    php-redis:
I0111 22:04:49.291]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0111 22:04:49.401] Namespace:    namespace-1547244287-392
I0111 22:04:49.401] Selector:     app=guestbook,tier=frontend
I0111 22:04:49.401] Labels:       app=guestbook
I0111 22:04:49.402]               tier=frontend
I0111 22:04:49.402] Annotations:  <none>
I0111 22:04:49.402] Replicas:     3 current / 3 desired
I0111 22:04:49.402] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:04:49.402] Pod Template:
I0111 22:04:49.402]   Labels:  app=guestbook
I0111 22:04:49.402]            tier=frontend
I0111 22:04:49.403]   Containers:
I0111 22:04:49.403]    php-redis:
I0111 22:04:49.403]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0111 22:04:50.257] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0111 22:04:50.348] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0111 22:04:50.441] replicationcontroller/frontend scaled
I0111 22:04:50.541] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0111 22:04:50.625] replicationcontroller "frontend" deleted
W0111 22:04:50.726] I0111 22:04:49.595863   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244287-392", Name:"frontend", UID:"e954dc3e-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-zlsgw
W0111 22:04:50.726] error: Expected replicas to be 3, was 2
W0111 22:04:50.726] I0111 22:04:50.160927   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244287-392", Name:"frontend", UID:"e954dc3e-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cnrvs
W0111 22:04:50.727] I0111 22:04:50.446011   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244287-392", Name:"frontend", UID:"e954dc3e-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1406", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-cnrvs
W0111 22:04:50.792] I0111 22:04:50.792049   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244287-392", Name:"redis-master", UID:"ead5d8c0-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-r7rqq
I0111 22:04:50.893] replicationcontroller/redis-master created
I0111 22:04:50.957] replicationcontroller/redis-slave created
W0111 22:04:51.058] I0111 22:04:50.960700   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1547244287-392", Name:"redis-slave", UID:"eaef8478-15ec-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-jlqgs
... skipping 36 lines ...
I0111 22:04:52.648] service "expose-test-deployment" deleted
I0111 22:04:52.751] Successful
I0111 22:04:52.751] message:service/expose-test-deployment exposed
I0111 22:04:52.751] has:service/expose-test-deployment exposed
I0111 22:04:52.833] service "expose-test-deployment" deleted
I0111 22:04:52.930] Successful
I0111 22:04:52.930] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 22:04:52.930] See 'kubectl expose -h' for help and examples
I0111 22:04:52.931] has:invalid deployment: no selectors
I0111 22:04:53.016] Successful
I0111 22:04:53.017] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0111 22:04:53.017] See 'kubectl expose -h' for help and examples
I0111 22:04:53.017] has:invalid deployment: no selectors
I0111 22:04:53.169] deployment.apps/nginx-deployment created
W0111 22:04:53.270] I0111 22:04:53.173310   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244287-392", Name:"nginx-deployment", UID:"ec411b4b-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-659fc6fb to 3
W0111 22:04:53.271] I0111 22:04:53.177382   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-659fc6fb", UID:"ec41ab8a-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1525", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-24mgk
W0111 22:04:53.271] I0111 22:04:53.182577   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-659fc6fb", UID:"ec41ab8a-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1525", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-659fc6fb-d5rdn
... skipping 23 lines ...
I0111 22:04:55.577] service "frontend" deleted
I0111 22:04:55.591] service "frontend-2" deleted
I0111 22:04:55.603] service "frontend-3" deleted
I0111 22:04:55.615] service "frontend-4" deleted
I0111 22:04:55.628] service "frontend-5" deleted
I0111 22:04:55.782] Successful
I0111 22:04:55.783] message:error: cannot expose a Node
I0111 22:04:55.783] has:cannot expose
I0111 22:04:55.924] Successful
I0111 22:04:55.925] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0111 22:04:55.925] has:metadata.name: Invalid value
I0111 22:04:56.070] Successful
I0111 22:04:56.071] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0111 22:04:58.745] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 22:04:58.864] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0111 22:04:58.953] horizontalpodautoscaler.autoscaling "frontend" deleted
I0111 22:04:59.073] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 22:04:59.182] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 22:04:59.273] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 22:04:59.374] Error: required flag(s) "max" not set
W0111 22:04:59.374] 
W0111 22:04:59.374] 
W0111 22:04:59.375] Examples:
W0111 22:04:59.375]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 22:04:59.375]   kubectl autoscale deployment foo --min=2 --max=10
W0111 22:04:59.375]   
... skipping 54 lines ...
I0111 22:04:59.658]           limits:
I0111 22:04:59.658]             cpu: 300m
I0111 22:04:59.658]           requests:
I0111 22:04:59.658]             cpu: 300m
I0111 22:04:59.658]       terminationGracePeriodSeconds: 0
I0111 22:04:59.659] status: {}
W0111 22:04:59.760] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0111 22:04:59.937] deployment.apps/nginx-deployment-resources created
W0111 22:05:00.038] I0111 22:04:59.940565   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources", UID:"f0499d3c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69c96fd869 to 3
W0111 22:05:00.039] I0111 22:04:59.945873   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-69c96fd869", UID:"f04a4c14-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-8klhg
W0111 22:05:00.039] I0111 22:04:59.949072   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-69c96fd869", UID:"f04a4c14-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-t4n77
W0111 22:05:00.040] I0111 22:04:59.950879   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-69c96fd869", UID:"f04a4c14-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69c96fd869-t8t5r
I0111 22:05:00.140] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0111 22:05:00.376] deployment.extensions/nginx-deployment-resources resource requirements updated
I0111 22:05:00.480] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0111 22:05:00.577] core.sh:1258: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0111 22:05:00.769] deployment.extensions/nginx-deployment-resources resource requirements updated
W0111 22:05:00.870] I0111 22:05:00.379803   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources", UID:"f0499d3c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1678", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c5996c457 to 1
W0111 22:05:00.870] I0111 22:05:00.383377   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-6c5996c457", UID:"f08d5b56-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1679", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c5996c457-98tz5
W0111 22:05:00.870] error: unable to find container named redis
W0111 22:05:00.871] I0111 22:05:00.780198   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources", UID:"f0499d3c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69c96fd869 to 2
W0111 22:05:00.871] I0111 22:05:00.785253   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-69c96fd869", UID:"f04a4c14-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69c96fd869-8klhg
W0111 22:05:00.871] I0111 22:05:00.786991   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources", UID:"f0499d3c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5f4579485f to 1
W0111 22:05:00.872] I0111 22:05:00.791427   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244287-392", Name:"nginx-deployment-resources-5f4579485f", UID:"f0c94a40-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5f4579485f-46kpp
I0111 22:05:00.972] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 22:05:00.989] core.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 79 lines ...
I0111 22:05:01.539]     status: "True"
I0111 22:05:01.539]     type: Progressing
I0111 22:05:01.540]   observedGeneration: 4
I0111 22:05:01.540]   replicas: 4
I0111 22:05:01.540]   unavailableReplicas: 4
I0111 22:05:01.540]   updatedReplicas: 1
W0111 22:05:01.641] error: you must specify resources by --filename when --local is set.
W0111 22:05:01.641] Example resource specifications include:
W0111 22:05:01.641]    '-f rsrc.yaml'
W0111 22:05:01.641]    '--filename=rsrc.json'
I0111 22:05:01.742] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0111 22:05:01.822] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0111 22:05:01.927] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0111 22:05:03.517]                 pod-template-hash=55c9b846cc
I0111 22:05:03.517] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0111 22:05:03.517]                 deployment.kubernetes.io/max-replicas: 2
I0111 22:05:03.517]                 deployment.kubernetes.io/revision: 1
I0111 22:05:03.517] Controlled By:  Deployment/test-nginx-apps
I0111 22:05:03.517] Replicas:       1 current / 1 desired
I0111 22:05:03.518] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:03.518] Pod Template:
I0111 22:05:03.518]   Labels:  app=test-nginx-apps
I0111 22:05:03.518]            pod-template-hash=55c9b846cc
I0111 22:05:03.518]   Containers:
I0111 22:05:03.518]    nginx:
I0111 22:05:03.518]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0111 22:05:07.744] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0111 22:05:07.744] I0111 22:05:07.254828   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx", UID:"f4568eea-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9486b7cb7 to 1
W0111 22:05:07.745] I0111 22:05:07.257579   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-9486b7cb7", UID:"f4a670e4-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9486b7cb7-r5qhd
I0111 22:05:08.738] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:05:08.926] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:05:09.027] deployment.extensions/nginx rolled back
W0111 22:05:09.127] error: unable to find specified revision 1000000 in history
I0111 22:05:10.122] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0111 22:05:10.217] deployment.extensions/nginx paused
W0111 22:05:10.320] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0111 22:05:10.421] deployment.extensions/nginx resumed
I0111 22:05:10.519] deployment.extensions/nginx rolled back
I0111 22:05:10.701]     deployment.kubernetes.io/revision-history: 1,3
W0111 22:05:10.888] error: desired revision (3) is different from the running revision (5)
I0111 22:05:11.051] deployment.apps/nginx2 created
I0111 22:05:11.141] deployment.extensions "nginx2" deleted
I0111 22:05:11.230] deployment.extensions "nginx" deleted
I0111 22:05:11.329] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:05:11.480] deployment.apps/nginx-deployment created
I0111 22:05:11.585] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 18 lines ...
W0111 22:05:13.185] I0111 22:05:11.483566   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment", UID:"f72b001c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1946", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-646d4f779d to 3
W0111 22:05:13.185] I0111 22:05:11.489190   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-646d4f779d", UID:"f72b9fac-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1947", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-kwmcz
W0111 22:05:13.186] I0111 22:05:11.493405   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-646d4f779d", UID:"f72b9fac-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1947", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-dmv6l
W0111 22:05:13.186] I0111 22:05:11.493686   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-646d4f779d", UID:"f72b9fac-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1947", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-646d4f779d-vxxpk
W0111 22:05:13.186] I0111 22:05:11.877427   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment", UID:"f72b001c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-85db47bbdb to 1
W0111 22:05:13.186] I0111 22:05:11.881135   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-85db47bbdb", UID:"f767bf19-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1961", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-85db47bbdb-zrvqr
W0111 22:05:13.187] error: unable to find container named "redis"
W0111 22:05:13.187] I0111 22:05:13.093011   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment", UID:"f72b001c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-646d4f779d to 2
W0111 22:05:13.187] I0111 22:05:13.098512   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-646d4f779d", UID:"f72b9fac-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-646d4f779d-kwmcz
W0111 22:05:13.187] I0111 22:05:13.100024   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment", UID:"f72b001c-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-dc756cc6 to 1
W0111 22:05:13.188] I0111 22:05:13.103424   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-dc756cc6", UID:"f8204aa9-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-dc756cc6-dxt5t
I0111 22:05:13.288] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0111 22:05:13.297] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 58 lines ...
I0111 22:05:16.413] replicaset.extensions "frontend" deleted
I0111 22:05:16.511] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:05:16.604] apps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:05:16.759] replicaset.apps/frontend-no-cascade created
W0111 22:05:16.860] I0111 22:05:15.464176   56042 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment", UID:"f89ccc0f-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2103", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-669d4f8fc9 to 1
W0111 22:05:16.861] I0111 22:05:15.467346   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244302-26535", Name:"nginx-deployment-669d4f8fc9", UID:"f97383c0-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2112", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-669d4f8fc9-kgctb
W0111 22:05:16.861] E0111 22:05:15.610708   56042 replica_set.go:450] Sync "namespace-1547244302-26535/nginx-deployment-669d4f8fc9" failed with replicasets.apps "nginx-deployment-669d4f8fc9" not found
W0111 22:05:16.861] E0111 22:05:15.711788   56042 replica_set.go:450] Sync "namespace-1547244302-26535/nginx-deployment-75bf89d86f" failed with replicasets.apps "nginx-deployment-75bf89d86f" not found
W0111 22:05:16.861] I0111 22:05:16.327747   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fa0df502-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6qf9m
W0111 22:05:16.861] I0111 22:05:16.330654   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fa0df502-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ddbn6
W0111 22:05:16.862] I0111 22:05:16.330841   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fa0df502-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tkswl
W0111 22:05:16.862] I0111 22:05:16.763046   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend-no-cascade", UID:"fa5097dd-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-dkkjv
W0111 22:05:16.862] I0111 22:05:16.766039   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend-no-cascade", UID:"fa5097dd-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-p4hvm
W0111 22:05:16.863] I0111 22:05:16.766079   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend-no-cascade", UID:"fa5097dd-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-qrt58
... skipping 17 lines ...
I0111 22:05:17.871] Namespace:    namespace-1547244315-10429
I0111 22:05:17.871] Selector:     app=guestbook,tier=frontend
I0111 22:05:17.871] Labels:       app=guestbook
I0111 22:05:17.871]               tier=frontend
I0111 22:05:17.872] Annotations:  <none>
I0111 22:05:17.872] Replicas:     3 current / 3 desired
I0111 22:05:17.872] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:17.872] Pod Template:
I0111 22:05:17.872]   Labels:  app=guestbook
I0111 22:05:17.872]            tier=frontend
I0111 22:05:17.872]   Containers:
I0111 22:05:17.872]    php-redis:
I0111 22:05:17.872]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:05:17.994] Namespace:    namespace-1547244315-10429
I0111 22:05:17.994] Selector:     app=guestbook,tier=frontend
I0111 22:05:17.994] Labels:       app=guestbook
I0111 22:05:17.995]               tier=frontend
I0111 22:05:17.995] Annotations:  <none>
I0111 22:05:17.995] Replicas:     3 current / 3 desired
I0111 22:05:17.995] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:17.995] Pod Template:
I0111 22:05:17.995]   Labels:  app=guestbook
I0111 22:05:17.995]            tier=frontend
I0111 22:05:17.995]   Containers:
I0111 22:05:17.995]    php-redis:
I0111 22:05:17.996]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 22:05:18.113] Namespace:    namespace-1547244315-10429
I0111 22:05:18.113] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.113] Labels:       app=guestbook
I0111 22:05:18.113]               tier=frontend
I0111 22:05:18.114] Annotations:  <none>
I0111 22:05:18.114] Replicas:     3 current / 3 desired
I0111 22:05:18.114] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.114] Pod Template:
I0111 22:05:18.114]   Labels:  app=guestbook
I0111 22:05:18.114]            tier=frontend
I0111 22:05:18.114]   Containers:
I0111 22:05:18.115]    php-redis:
I0111 22:05:18.115]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0111 22:05:18.244] Namespace:    namespace-1547244315-10429
I0111 22:05:18.245] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.245] Labels:       app=guestbook
I0111 22:05:18.245]               tier=frontend
I0111 22:05:18.245] Annotations:  <none>
I0111 22:05:18.245] Replicas:     3 current / 3 desired
I0111 22:05:18.245] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.245] Pod Template:
I0111 22:05:18.245]   Labels:  app=guestbook
I0111 22:05:18.245]            tier=frontend
I0111 22:05:18.245]   Containers:
I0111 22:05:18.245]    php-redis:
I0111 22:05:18.245]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0111 22:05:18.425] Namespace:    namespace-1547244315-10429
I0111 22:05:18.425] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.425] Labels:       app=guestbook
I0111 22:05:18.426]               tier=frontend
I0111 22:05:18.426] Annotations:  <none>
I0111 22:05:18.426] Replicas:     3 current / 3 desired
I0111 22:05:18.426] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.426] Pod Template:
I0111 22:05:18.426]   Labels:  app=guestbook
I0111 22:05:18.426]            tier=frontend
I0111 22:05:18.426]   Containers:
I0111 22:05:18.426]    php-redis:
I0111 22:05:18.426]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:05:18.536] Namespace:    namespace-1547244315-10429
I0111 22:05:18.536] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.537] Labels:       app=guestbook
I0111 22:05:18.537]               tier=frontend
I0111 22:05:18.537] Annotations:  <none>
I0111 22:05:18.537] Replicas:     3 current / 3 desired
I0111 22:05:18.537] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.537] Pod Template:
I0111 22:05:18.537]   Labels:  app=guestbook
I0111 22:05:18.537]            tier=frontend
I0111 22:05:18.537]   Containers:
I0111 22:05:18.537]    php-redis:
I0111 22:05:18.537]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0111 22:05:18.656] Namespace:    namespace-1547244315-10429
I0111 22:05:18.656] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.656] Labels:       app=guestbook
I0111 22:05:18.656]               tier=frontend
I0111 22:05:18.656] Annotations:  <none>
I0111 22:05:18.656] Replicas:     3 current / 3 desired
I0111 22:05:18.656] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.657] Pod Template:
I0111 22:05:18.657]   Labels:  app=guestbook
I0111 22:05:18.657]            tier=frontend
I0111 22:05:18.657]   Containers:
I0111 22:05:18.657]    php-redis:
I0111 22:05:18.657]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0111 22:05:18.776] Namespace:    namespace-1547244315-10429
I0111 22:05:18.776] Selector:     app=guestbook,tier=frontend
I0111 22:05:18.776] Labels:       app=guestbook
I0111 22:05:18.776]               tier=frontend
I0111 22:05:18.776] Annotations:  <none>
I0111 22:05:18.776] Replicas:     3 current / 3 desired
I0111 22:05:18.776] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:18.776] Pod Template:
I0111 22:05:18.776]   Labels:  app=guestbook
I0111 22:05:18.777]            tier=frontend
I0111 22:05:18.777]   Containers:
I0111 22:05:18.777]    php-redis:
I0111 22:05:18.777]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0111 22:05:24.131] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0111 22:05:24.227] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0111 22:05:24.310] horizontalpodautoscaler.autoscaling "frontend" deleted
W0111 22:05:24.410] I0111 22:05:23.677190   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fe6fb48e-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s4dxz
W0111 22:05:24.411] I0111 22:05:23.679976   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fe6fb48e-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rd8nf
W0111 22:05:24.411] I0111 22:05:23.680186   56042 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1547244315-10429", Name:"frontend", UID:"fe6fb48e-15ec-11e9-a03c-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nss4b
W0111 22:05:24.411] Error: required flag(s) "max" not set
W0111 22:05:24.411] 
W0111 22:05:24.411] 
W0111 22:05:24.412] Examples:
W0111 22:05:24.412]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0111 22:05:24.412]   kubectl autoscale deployment foo --min=2 --max=10
W0111 22:05:24.412]   
... skipping 88 lines ...
I0111 22:05:27.468] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0111 22:05:27.563] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0111 22:05:27.680] statefulset.apps/nginx rolled back
I0111 22:05:27.785] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 22:05:27.878] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:05:27.992] Successful
I0111 22:05:27.993] message:error: unable to find specified revision 1000000 in history
I0111 22:05:27.993] has:unable to find specified revision
I0111 22:05:28.090] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0111 22:05:28.186] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0111 22:05:28.294] statefulset.apps/nginx rolled back
I0111 22:05:28.397] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0111 22:05:28.496] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0111 22:05:30.349] Name:         mock
I0111 22:05:30.349] Namespace:    namespace-1547244329-3574
I0111 22:05:30.349] Selector:     app=mock
I0111 22:05:30.350] Labels:       app=mock
I0111 22:05:30.350] Annotations:  <none>
I0111 22:05:30.350] Replicas:     1 current / 1 desired
I0111 22:05:30.350] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:30.350] Pod Template:
I0111 22:05:30.350]   Labels:  app=mock
I0111 22:05:30.350]   Containers:
I0111 22:05:30.350]    mock-container:
I0111 22:05:30.350]     Image:        k8s.gcr.io/pause:2.0
I0111 22:05:30.351]     Port:         9949/TCP
... skipping 56 lines ...
I0111 22:05:32.657] Name:         mock
I0111 22:05:32.657] Namespace:    namespace-1547244329-3574
I0111 22:05:32.657] Selector:     app=mock
I0111 22:05:32.657] Labels:       app=mock
I0111 22:05:32.657] Annotations:  <none>
I0111 22:05:32.657] Replicas:     1 current / 1 desired
I0111 22:05:32.657] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:32.657] Pod Template:
I0111 22:05:32.657]   Labels:  app=mock
I0111 22:05:32.657]   Containers:
I0111 22:05:32.657]    mock-container:
I0111 22:05:32.657]     Image:        k8s.gcr.io/pause:2.0
I0111 22:05:32.658]     Port:         9949/TCP
... skipping 56 lines ...
I0111 22:05:34.900] Name:         mock
I0111 22:05:34.900] Namespace:    namespace-1547244329-3574
I0111 22:05:34.900] Selector:     app=mock
I0111 22:05:34.900] Labels:       app=mock
I0111 22:05:34.901] Annotations:  <none>
I0111 22:05:34.901] Replicas:     1 current / 1 desired
I0111 22:05:34.901] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:34.901] Pod Template:
I0111 22:05:34.901]   Labels:  app=mock
I0111 22:05:34.901]   Containers:
I0111 22:05:34.901]    mock-container:
I0111 22:05:34.901]     Image:        k8s.gcr.io/pause:2.0
I0111 22:05:34.901]     Port:         9949/TCP
... skipping 42 lines ...
I0111 22:05:37.063] Namespace:    namespace-1547244329-3574
I0111 22:05:37.063] Selector:     app=mock
I0111 22:05:37.063] Labels:       app=mock
I0111 22:05:37.064]               status=replaced
I0111 22:05:37.064] Annotations:  <none>
I0111 22:05:37.064] Replicas:     1 current / 1 desired
I0111 22:05:37.064] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:37.064] Pod Template:
I0111 22:05:37.064]   Labels:  app=mock
I0111 22:05:37.064]   Containers:
I0111 22:05:37.064]    mock-container:
I0111 22:05:37.064]     Image:        k8s.gcr.io/pause:2.0
I0111 22:05:37.064]     Port:         9949/TCP
... skipping 11 lines ...
I0111 22:05:37.066] Namespace:    namespace-1547244329-3574
I0111 22:05:37.066] Selector:     app=mock2
I0111 22:05:37.066] Labels:       app=mock2
I0111 22:05:37.066]               status=replaced
I0111 22:05:37.066] Annotations:  <none>
I0111 22:05:37.066] Replicas:     1 current / 1 desired
I0111 22:05:37.066] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0111 22:05:37.067] Pod Template:
I0111 22:05:37.067]   Labels:  app=mock2
I0111 22:05:37.067]   Containers:
I0111 22:05:37.067]    mock-container:
I0111 22:05:37.067]     Image:        k8s.gcr.io/pause:2.0
I0111 22:05:37.067]     Port:         9949/TCP
... skipping 127 lines ...
I0111 22:05:43.332] +++ [0111 22:05:43] Creating namespace namespace-1547244343-15365
I0111 22:05:43.405] namespace/namespace-1547244343-15365 created
I0111 22:05:43.476] Context "test" modified.
I0111 22:05:43.482] +++ [0111 22:05:43] Testing persistent volumes claims
I0111 22:05:43.574] storage.sh:57: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: 
I0111 22:05:43.732] persistentvolumeclaim/myclaim-1 created
W0111 22:05:43.832] E0111 22:05:42.950266   56042 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
W0111 22:05:43.833] I0111 22:05:43.732514   56042 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547244343-15365", Name:"myclaim-1", UID:"0a645e60-15ed-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"2671", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0111 22:05:43.833] I0111 22:05:43.735548   56042 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547244343-15365", Name:"myclaim-1", UID:"0a645e60-15ed-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0111 22:05:43.833] I0111 22:05:43.780079   56042 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547244343-15365", Name:"myclaim-1", UID:"0a645e60-15ed-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"2672", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0111 22:05:43.919] I0111 22:05:43.918814   56042 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1547244343-15365", Name:"myclaim-1", UID:"0a645e60-15ed-11e9-a03c-0242ac110002", APIVersion:"v1", ResourceVersion:"2676", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
I0111 22:05:44.020] storage.sh:60: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-1:
I0111 22:05:44.020] persistentvolumeclaim "myclaim-1" deleted
... skipping 451 lines ...
I0111 22:05:47.858] yes
I0111 22:05:47.858] has:the server doesn't have a resource type
I0111 22:05:47.943] Successful
I0111 22:05:47.944] message:yes
I0111 22:05:47.944] has:yes
I0111 22:05:48.024] Successful
I0111 22:05:48.025] message:error: --subresource can not be used with NonResourceURL
I0111 22:05:48.025] has:subresource can not be used with NonResourceURL
I0111 22:05:48.111] Successful
I0111 22:05:48.199] Successful
I0111 22:05:48.200] message:yes
I0111 22:05:48.200] 0
I0111 22:05:48.200] has:0
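The yes/no/0 answers in this stretch come from `kubectl auth can-i`, which reports whether the current user may perform a verb on a resource or on a non-resource URL; the one error shows that --subresource is only meaningful for resources. Roughly (the verb, resource, and URL here are illustrative):

```bash
kubectl auth can-i get pods                     # prints yes or no
kubectl auth can-i get pods --subresource=log   # subresources work with resources
kubectl auth can-i get /logs                    # a bare non-resource URL also works
kubectl auth can-i get /logs --subresource=log  # rejected: --subresource with a NonResourceURL
```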
... skipping 6 lines ...
I0111 22:05:48.402] role.rbac.authorization.k8s.io/testing-R reconciled
I0111 22:05:48.500] legacy-script.sh:737: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0111 22:05:48.593] legacy-script.sh:738: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0111 22:05:48.690] legacy-script.sh:739: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0111 22:05:48.786] legacy-script.sh:740: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0111 22:05:48.869] Successful
I0111 22:05:48.869] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0111 22:05:48.869] has:only rbac.authorization.k8s.io/v1 is supported
I0111 22:05:48.963] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0111 22:05:48.969] role.rbac.authorization.k8s.io "testing-R" deleted
I0111 22:05:48.978] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0111 22:05:48.986] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
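The testing-R/testing-RB/testing-CR/testing-CRB objects above are managed with `kubectl auth reconcile`, which creates or updates RBAC objects from a manifest rather than replacing them, and which only accepts rbac.authorization.k8s.io/v1 kinds, hence the v1beta1.ClusterRole rejection. A sketch under that assumption (file names are illustrative):

```bash
# Create-or-update labelled RBAC objects from a v1 manifest.
kubectl auth reconcile -f rbac-v1.yaml

# A manifest that still uses v1beta1 kinds is rejected up front:
#   error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
kubectl auth reconcile -f rbac-v1beta1.yaml
```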
I0111 22:05:48.998] Recording: run_retrieve_multiple_tests
... skipping 1021 lines ...
I0111 22:06:16.509] message:node/127.0.0.1 already uncordoned (dry run)
I0111 22:06:16.509] has:already uncordoned
I0111 22:06:16.599] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0111 22:06:16.677] node/127.0.0.1 labeled
I0111 22:06:16.771] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0111 22:06:16.838] Successful
I0111 22:06:16.838] message:error: cannot specify both a node name and a --selector option
I0111 22:06:16.839] See 'kubectl drain -h' for help and examples
I0111 22:06:16.839] has:cannot specify both a node name
I0111 22:06:16.906] Successful
I0111 22:06:16.906] message:error: USAGE: cordon NODE [flags]
I0111 22:06:16.906] See 'kubectl cordon -h' for help and examples
I0111 22:06:16.906] has:error\: USAGE\: cordon NODE
I0111 22:06:16.986] node/127.0.0.1 already uncordoned
I0111 22:06:17.063] Successful
I0111 22:06:17.063] message:error: You must provide one or more resources by argument or filename.
I0111 22:06:17.063] Example resource specifications include:
I0111 22:06:17.063]    '-f rsrc.yaml'
I0111 22:06:17.063]    '--filename=rsrc.json'
I0111 22:06:17.063]    '<resource> <name>'
I0111 22:06:17.064]    '<resource>'
I0111 22:06:17.064] has:must provide one or more resources
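The usage errors above are kubectl's argument validation for the cordon/drain family: a node name and --selector are mutually exclusive, cordon wants exactly one NODE, and the command refuses to run with no resource argument at all. The accepted forms look roughly like this (the selector reuses the test=label label applied above; --ignore-daemonsets is just a commonly needed drain flag, not something this log shows):

```bash
kubectl cordon 127.0.0.1                      # exactly one node name
kubectl uncordon 127.0.0.1
kubectl drain 127.0.0.1 --ignore-daemonsets   # a node name...
kubectl drain --selector=test=label           # ...or a label selector, never both
```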
... skipping 15 lines ...
I0111 22:06:17.490] Successful
I0111 22:06:17.490] message:The following kubectl-compatible plugins are available:
I0111 22:06:17.490] 
I0111 22:06:17.490] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0111 22:06:17.491]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0111 22:06:17.491] 
I0111 22:06:17.491] error: one plugin warning was found
I0111 22:06:17.491] has:kubectl-version overwrites existing command: "kubectl version"
I0111 22:06:17.564] Successful
I0111 22:06:17.564] message:The following kubectl-compatible plugins are available:
I0111 22:06:17.564] 
I0111 22:06:17.564] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:06:17.564] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0111 22:06:17.564]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:06:17.565] 
I0111 22:06:17.565] error: one plugin warning was found
I0111 22:06:17.565] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0111 22:06:17.643] Successful
I0111 22:06:17.643] message:The following kubectl-compatible plugins are available:
I0111 22:06:17.643] 
I0111 22:06:17.643] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0111 22:06:17.643] has:plugins are available
I0111 22:06:17.722] Successful
I0111 22:06:17.723] message:
I0111 22:06:17.723] error: unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" in your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory
I0111 22:06:17.723] error: unable to find any kubectl plugins in your PATH
I0111 22:06:17.723] has:unable to find any kubectl plugins in your PATH
I0111 22:06:17.797] Successful
I0111 22:06:17.797] message:I am plugin foo
I0111 22:06:17.797] has:plugin foo
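The plugin checks above exercise kubectl's git-style plugin discovery: any executable on PATH whose name starts with kubectl- is listed, a kubectl-version plugin draws a warning because it would shadow the built-in `kubectl version`, and two files with the same basename on different PATH entries produce the "overshadowed" warning. A minimal sketch of the kind of plugin the fixtures provide (the file location is illustrative):

```bash
# A kubectl plugin is just an executable named kubectl-<name> on PATH.
cat > kubectl-foo <<'EOF'
#!/usr/bin/env bash
echo "I am plugin foo"
EOF
chmod +x kubectl-foo
PATH="$PWD:$PATH" kubectl plugin list   # discovers ./kubectl-foo
PATH="$PWD:$PATH" kubectl foo           # prints: I am plugin foo
```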
I0111 22:06:17.880] Successful
I0111 22:06:17.881] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.0.1650+2396de75dbd2d9", GitCommit:"2396de75dbd2d9ff57965ecdea288d1b826502ad", GitTreeState:"clean", BuildDate:"2019-01-11T21:59:29Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0111 22:06:17.957] 
I0111 22:06:17.959] +++ Running case: test-cmd.run_impersonation_tests 
I0111 22:06:17.961] +++ working dir: /go/src/k8s.io/kubernetes
I0111 22:06:17.963] +++ command: run_impersonation_tests
I0111 22:06:17.972] +++ [0111 22:06:17] Testing impersonation
I0111 22:06:18.046] Successful
I0111 22:06:18.047] message:error: requesting groups or user-extra for  without impersonating a user
I0111 22:06:18.047] has:without impersonating a user
I0111 22:06:18.216] certificatesigningrequest.certificates.k8s.io/foo created
I0111 22:06:18.320] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0111 22:06:18.419] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0111 22:06:18.508] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0111 22:06:18.696] certificatesigningrequest.certificates.k8s.io/foo created
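The impersonation checks above cover both directions: kubectl rejects --as-group without a --as user, and when a user is impersonated the created CSR records that identity (spec.username user1, spec.groups system:authenticated). In rough kubectl terms (csr.yaml and the group name stand in for whatever the test actually submits):

```bash
# Rejected: groups cannot be impersonated without a user.
kubectl create -f csr.yaml --as-group=some-group

# Accepted: the object is attributed to the impersonated user.
kubectl create -f csr.yaml --as=user1
kubectl get csr foo -o jsonpath='{.spec.username}'   # user1
```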
... skipping 19 lines ...
W0111 22:06:19.230] I0111 22:06:19.227878   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.230] I0111 22:06:19.227900   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.231] I0111 22:06:19.227984   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.231] I0111 22:06:19.227994   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.231] I0111 22:06:19.228003   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.231] I0111 22:06:19.228005   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.231] W0111 22:06:19.227986   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.232] W0111 22:06:19.228035   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.232] I0111 22:06:19.228155   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.232] I0111 22:06:19.228165   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.232] I0111 22:06:19.228319   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.232] I0111 22:06:19.228327   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.232] I0111 22:06:19.228350   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.233] I0111 22:06:19.228356   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 19 lines ...
W0111 22:06:19.236] I0111 22:06:19.228788   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.237] I0111 22:06:19.228795   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.237] I0111 22:06:19.228829   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.237] I0111 22:06:19.228842   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.237] I0111 22:06:19.228874   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.237] I0111 22:06:19.228881   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.238] W0111 22:06:19.228890   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.238] W0111 22:06:19.228906   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.238] W0111 22:06:19.228912   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.238] I0111 22:06:19.228933   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.238] I0111 22:06:19.228955   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.239] W0111 22:06:19.228957   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.239] I0111 22:06:19.228940   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.239] W0111 22:06:19.228967   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.239] W0111 22:06:19.228995   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.240] W0111 22:06:19.229027   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.240] W0111 22:06:19.229099   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.240] W0111 22:06:19.229159   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.241] W0111 22:06:19.229294   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.241] W0111 22:06:19.229316   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.241] W0111 22:06:19.229343   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.241] W0111 22:06:19.229348   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.241] I0111 22:06:19.229373   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0111 22:06:19.242] I0111 22:06:19.229392   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.242] I0111 22:06:19.228973   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.242] W0111 22:06:19.228942   52691 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0111 22:06:19.242] I0111 22:06:19.229664   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.242] E0111 22:06:19.229664   52691 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0111 22:06:19.242] I0111 22:06:19.229675   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.242] I0111 22:06:19.229709   52691 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0111 22:06:19.286] + make test-integration
I0111 22:06:19.386] No resources found
I0111 22:06:19.386] pod "test-pod-1" force deleted
I0111 22:06:19.386] +++ [0111 22:06:19] TESTS PASSED
... skipping 12 lines ...
I0111 22:11:32.834] ok  	k8s.io/kubernetes/test/integration/apimachinery	156.009s
I0111 22:11:32.835] ok  	k8s.io/kubernetes/test/integration/apiserver	37.872s
I0111 22:11:32.836] [restful] 2019/01/11 22:09:01 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42917/swaggerapi
I0111 22:11:32.836] [restful] 2019/01/11 22:09:01 log.go:33: [restful/swagger] https://127.0.0.1:42917/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:11:32.836] [restful] 2019/01/11 22:09:03 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42917/swaggerapi
I0111 22:11:32.836] [restful] 2019/01/11 22:09:03 log.go:33: [restful/swagger] https://127.0.0.1:42917/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:11:32.836] FAIL	k8s.io/kubernetes/test/integration/auth	215.845s
I0111 22:11:32.836] [restful] 2019/01/11 22:07:53 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43411/swaggerapi
I0111 22:11:32.837] [restful] 2019/01/11 22:07:53 log.go:33: [restful/swagger] https://127.0.0.1:43411/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:11:32.837] [restful] 2019/01/11 22:07:55 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:43411/swaggerapi
I0111 22:11:32.837] [restful] 2019/01/11 22:07:55 log.go:33: [restful/swagger] https://127.0.0.1:43411/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:11:32.838] [restful] 2019/01/11 22:08:02 log.go:33: [restful/swagger] listing is available at https://127.0.0.1:42541/swaggerapi
I0111 22:11:32.838] [restful] 2019/01/11 22:08:02 log.go:33: [restful/swagger] https://127.0.0.1:42541/swaggerui/ is mapped to folder /swagger-ui/
... skipping 228 lines ...
I0111 22:19:09.978] [restful] 2019/01/11 22:12:31 log.go:33: [restful/swagger] https://127.0.0.1:43555/swaggerui/ is mapped to folder /swagger-ui/
I0111 22:19:09.978] ok  	k8s.io/kubernetes/test/integration/tls	15.205s
I0111 22:19:09.978] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	11.455s
I0111 22:19:09.978] ok  	k8s.io/kubernetes/test/integration/volume	92.694s
I0111 22:19:09.978] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	145.847s
I0111 22:19:23.623] +++ [0111 22:19:23] Saved JUnit XML test report to /workspace/artifacts/junit_4a55e0dab36e58da54f277b74e7f2598a8df8500_20190111-220628.xml
I0111 22:19:23.626] Makefile:184: recipe for target 'test' failed
I0111 22:19:23.637] +++ [0111 22:19:23] Cleaning up etcd
W0111 22:19:23.737] make[1]: *** [test] Error 1
W0111 22:19:23.738] !!! [0111 22:19:23] Call tree:
W0111 22:19:23.738] !!! [0111 22:19:23]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0111 22:19:23.885] +++ [0111 22:19:23] Integration test cleanup complete
I0111 22:19:23.886] Makefile:203: recipe for target 'test-integration' failed
W0111 22:19:23.987] make: *** [test-integration] Error 1
W0111 22:19:26.247] Traceback (most recent call last):
W0111 22:19:26.247]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0111 22:19:26.248]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0111 22:19:26.248]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0111 22:19:26.248]     check(*cmd)
W0111 22:19:26.248]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0111 22:19:26.248]     subprocess.check_call(cmd)
W0111 22:19:26.248]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0111 22:19:26.248]     raise CalledProcessError(retcode, cmd)
W0111 22:19:26.249] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20181218-db74ab3f4', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0111 22:19:26.254] Command failed
I0111 22:19:26.254] process 701 exited with code 1 after 25.9m
E0111 22:19:26.254] FAIL: pull-kubernetes-integration
I0111 22:19:26.255] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0111 22:19:26.756] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0111 22:19:26.813] process 124215 exited with code 0 after 0.0m
I0111 22:19:26.813] Call:  gcloud config get-value account
I0111 22:19:27.115] process 124227 exited with code 0 after 0.0m
I0111 22:19:27.115] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0111 22:19:27.116] Upload result and artifacts...
I0111 22:19:27.116] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/72730/pull-kubernetes-integration/41067
I0111 22:19:27.116] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/72730/pull-kubernetes-integration/41067/artifacts
W0111 22:19:28.289] CommandException: One or more URLs matched no objects.
E0111 22:19:28.440] Command failed
I0111 22:19:28.441] process 124239 exited with code 1 after 0.0m
W0111 22:19:28.441] Remote dir gs://kubernetes-jenkins/pr-logs/pull/72730/pull-kubernetes-integration/41067/artifacts not exist yet
I0111 22:19:28.441] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/72730/pull-kubernetes-integration/41067/artifacts
I0111 22:19:31.942] process 124381 exited with code 0 after 0.1m
W0111 22:19:31.943] metadata path /workspace/_artifacts/metadata.json does not exist
W0111 22:19:31.943] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...