Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-11 15:22
Elapsed: 27m17s
Builder: gke-prow-ssd-pool-1a225945-c9hz
Refs: master:aa07db3b
82053:07ba65df
82060:45d6f088
82064:6c46135f
82095:a068e071
82113:8dc401d1
82121:aa20910e
82161:4558dd40
82170:6392b69a
82175:9828f986
82187:7d4bb382
82193:f1b314bf
82209:89a70fa4
82210:270ddcea
82222:d48e47a9
82224:6b961eb0
pod: e6fc4fc4-d4a7-11e9-affb-2e2498aeb6d1
resultstore: https://source.cloud.google.com/results/invocations/ca4a2b84-71f2-49fb-b6f3-f823e57f4467/targets/test
infra-commit: 3d071c448
repo: k8s.io/kubernetes
repo-commit: 413d9623e87abbdea53300f5224b1b03b74dd65b
repos: k8s.io/kubernetes: master:aa07db3b775415cc949ddab2ecfeddeff9dac8bb,82053:07ba65df6d69cf951063f08e52eafe218be21481,82060:45d6f08868ce2729182aae5734a00e5e27ae08f9,82064:6c46135ff5647b97aa6e38023b9e749a448d6536,82095:a068e0717410c877f94e9bc99166d1b6328beac2,82113:8dc401d14161e757540ab46e40ed443d0300cdb5,82121:aa20910e242f5b708d337a6fbebda5d9e35b88b5,82161:4558dd407ac8e692dc41c47268e15ff692949ff2,82170:6392b69a1d010f1c1453fc1a3b346e3ff2d708b2,82175:9828f986afd4db79a10c78bee1cc2e449faee3a6,82187:7d4bb382474893c5c3ed6e375b5ff4a99862eca0,82193:f1b314bf5a678e061fa748bc8c9497f40fad0784,82209:89a70fa407b10329e5e71de35d94616e8d444b2d,82210:270ddcea236d99cc8098c216199112913e1c11d4,82222:d48e47a95efecb9cdedce0877d8f7519c489775c,82224:6b961eb08cbcd06746835132767af4fa29fe5e39

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestBindPlugin 1m5s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestBindPlugin$
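A local repro sketch (assuming a k8s.io/kubernetes checkout at the refs above; the integration suite expects etcd on 127.0.0.1:2379, as the log below shows, and hack/install-etcd.sh can provide it; the make wrapper sets up the integration test environment before invoking go test):

make test-integration WHAT=./test/integration/scheduler GOFLAGS="-v" KUBE_TEST_ARGS="-run TestBindPlugin$"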
=== RUN   TestBindPlugin
W0911 15:44:42.429552  108799 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0911 15:44:42.429574  108799 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0911 15:44:42.429588  108799 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0911 15:44:42.429599  108799 master.go:259] Using reconciler: 
I0911 15:44:42.431893  108799 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.432242  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.432341  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.434327  108799 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0911 15:44:42.434364  108799 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.434417  108799 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0911 15:44:42.434678  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.434703  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.436238  108799 watch_cache.go:405] Replace watchCache (rev: 28206) 
I0911 15:44:42.437009  108799 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 15:44:42.437047  108799 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.437066  108799 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 15:44:42.437183  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.437202  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.438623  108799 watch_cache.go:405] Replace watchCache (rev: 28206) 
I0911 15:44:42.440376  108799 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0911 15:44:42.440415  108799 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.440542  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.440562  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.440633  108799 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0911 15:44:42.442337  108799 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0911 15:44:42.442502  108799 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.442645  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.442663  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.442738  108799 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0911 15:44:42.445067  108799 watch_cache.go:405] Replace watchCache (rev: 28211) 
I0911 15:44:42.445418  108799 watch_cache.go:405] Replace watchCache (rev: 28211) 
I0911 15:44:42.445698  108799 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0911 15:44:42.445919  108799 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.446024  108799 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0911 15:44:42.446071  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.446089  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.449487  108799 watch_cache.go:405] Replace watchCache (rev: 28211) 
I0911 15:44:42.451067  108799 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0911 15:44:42.451305  108799 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.451460  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.451483  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.451595  108799 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0911 15:44:42.454132  108799 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0911 15:44:42.454470  108799 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0911 15:44:42.455668  108799 watch_cache.go:405] Replace watchCache (rev: 28221) 
I0911 15:44:42.456109  108799 watch_cache.go:405] Replace watchCache (rev: 28221) 
I0911 15:44:42.457877  108799 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.458083  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.458105  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.459149  108799 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0911 15:44:42.459191  108799 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0911 15:44:42.459328  108799 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.459469  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.459496  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.461961  108799 watch_cache.go:405] Replace watchCache (rev: 28225) 
I0911 15:44:42.462268  108799 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0911 15:44:42.462415  108799 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.462579  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.462600  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.462676  108799 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0911 15:44:42.463749  108799 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0911 15:44:42.463889  108799 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.464067  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.464092  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.464178  108799 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0911 15:44:42.464374  108799 watch_cache.go:405] Replace watchCache (rev: 28225) 
I0911 15:44:42.465680  108799 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0911 15:44:42.465838  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.465987  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.466015  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.466093  108799 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0911 15:44:42.467511  108799 watch_cache.go:405] Replace watchCache (rev: 28226) 
I0911 15:44:42.467592  108799 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0911 15:44:42.467623  108799 watch_cache.go:405] Replace watchCache (rev: 28226) 
I0911 15:44:42.467739  108799 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.467849  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.467867  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.467931  108799 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0911 15:44:42.468900  108799 watch_cache.go:405] Replace watchCache (rev: 28226) 
I0911 15:44:42.469991  108799 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0911 15:44:42.470126  108799 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.470261  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.470279  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.470363  108799 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0911 15:44:42.471286  108799 watch_cache.go:405] Replace watchCache (rev: 28227) 
I0911 15:44:42.471521  108799 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0911 15:44:42.471548  108799 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.471596  108799 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0911 15:44:42.471732  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.471751  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.472286  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.472305  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.472556  108799 watch_cache.go:405] Replace watchCache (rev: 28227) 
I0911 15:44:42.473179  108799 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.473321  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.473346  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.474025  108799 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0911 15:44:42.474049  108799 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0911 15:44:42.474479  108799 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.474651  108799 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.475041  108799 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0911 15:44:42.475309  108799 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.476741  108799 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.477734  108799 watch_cache.go:405] Replace watchCache (rev: 28227) 
I0911 15:44:42.477721  108799 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.478534  108799 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.479030  108799 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.479176  108799 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.479358  108799 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.479807  108799 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.480439  108799 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.480771  108799 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.481795  108799 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.482219  108799 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.484601  108799 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.484981  108799 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.485805  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.486196  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.486458  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.486804  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.487171  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.487479  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.487654  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.488579  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.488990  108799 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.489976  108799 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.495151  108799 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.495593  108799 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.496025  108799 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.497174  108799 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.497626  108799 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.498578  108799 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.499670  108799 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.501120  108799 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.503537  108799 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.504121  108799 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.504457  108799 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0911 15:44:42.504590  108799 master.go:461] Enabling API group "authentication.k8s.io".
I0911 15:44:42.504699  108799 master.go:461] Enabling API group "authorization.k8s.io".
I0911 15:44:42.505048  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.505366  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.505501  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.506727  108799 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 15:44:42.507069  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.507275  108799 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 15:44:42.507523  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.507639  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.508985  108799 watch_cache.go:405] Replace watchCache (rev: 28244) 
I0911 15:44:42.508996  108799 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 15:44:42.509079  108799 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 15:44:42.509214  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.509328  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.509345  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.510911  108799 watch_cache.go:405] Replace watchCache (rev: 28244) 
I0911 15:44:42.513562  108799 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0911 15:44:42.513584  108799 master.go:461] Enabling API group "autoscaling".
I0911 15:44:42.513734  108799 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.513878  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.513900  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.514141  108799 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0911 15:44:42.514787  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.517018  108799 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0911 15:44:42.517299  108799 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.517534  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.517623  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.517845  108799 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0911 15:44:42.519154  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.519557  108799 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0911 15:44:42.519580  108799 master.go:461] Enabling API group "batch".
I0911 15:44:42.519710  108799 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0911 15:44:42.519721  108799 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.519821  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.519839  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.521054  108799 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0911 15:44:42.521096  108799 master.go:461] Enabling API group "certificates.k8s.io".
I0911 15:44:42.521231  108799 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.521352  108799 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0911 15:44:42.521375  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.521392  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.522380  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.522628  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.523770  108799 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 15:44:42.523838  108799 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 15:44:42.523977  108799 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.524197  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.524225  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.525183  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.525802  108799 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0911 15:44:42.525822  108799 master.go:461] Enabling API group "coordination.k8s.io".
I0911 15:44:42.525837  108799 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0911 15:44:42.526013  108799 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.526162  108799 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0911 15:44:42.526175  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.526192  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.527213  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.528250  108799 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 15:44:42.528274  108799 master.go:461] Enabling API group "extensions".
I0911 15:44:42.528425  108799 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.528547  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.528606  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.528890  108799 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 15:44:42.529390  108799 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0911 15:44:42.529513  108799 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.529648  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.529666  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.529704  108799 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0911 15:44:42.529786  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.530362  108799 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0911 15:44:42.530384  108799 master.go:461] Enabling API group "networking.k8s.io".
I0911 15:44:42.530416  108799 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.530581  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.530601  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.530685  108799 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0911 15:44:42.530794  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.531385  108799 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0911 15:44:42.531415  108799 master.go:461] Enabling API group "node.k8s.io".
I0911 15:44:42.531552  108799 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.531706  108799 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0911 15:44:42.531896  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.531915  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.532137  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.532845  108799 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0911 15:44:42.533015  108799 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0911 15:44:42.533010  108799 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.533257  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.533285  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.533776  108799 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0911 15:44:42.533803  108799 master.go:461] Enabling API group "policy".
I0911 15:44:42.533833  108799 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.533921  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.533983  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.534070  108799 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0911 15:44:42.534209  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.535837  108799 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 15:44:42.535905  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.536007  108799 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.536087  108799 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 15:44:42.536163  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.536181  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.537437  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.537677  108799 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 15:44:42.537740  108799 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.537792  108799 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 15:44:42.537877  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.537893  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.538809  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.539280  108799 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 15:44:42.539414  108799 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.539513  108799 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 15:44:42.539542  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.539559  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.540991  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.541445  108799 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 15:44:42.541493  108799 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.541608  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.541625  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.541702  108799 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 15:44:42.544028  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.544317  108799 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0911 15:44:42.544462  108799 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.544586  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.544603  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.544672  108799 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0911 15:44:42.547348  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.548323  108799 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0911 15:44:42.548349  108799 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0911 15:44:42.548359  108799 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.548483  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.548519  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.548891  108799 watch_cache.go:405] Replace watchCache (rev: 28249) 
I0911 15:44:42.549572  108799 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0911 15:44:42.549726  108799 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.549750  108799 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0911 15:44:42.549850  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.549866  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.550452  108799 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0911 15:44:42.550485  108799 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0911 15:44:42.551151  108799 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0911 15:44:42.551432  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.552506  108799 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.552651  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.552669  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.552711  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.553636  108799 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 15:44:42.553799  108799 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.553923  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.553959  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.554094  108799 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 15:44:42.555029  108799 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0911 15:44:42.555054  108799 master.go:461] Enabling API group "scheduling.k8s.io".
I0911 15:44:42.555228  108799 master.go:450] Skipping disabled API group "settings.k8s.io".
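The repeated "reflector.go:158] Listing and watching ..." lines above trace client-go's reflector, which LISTs a resource once to seed its store and then WATCHes for changes; the matching "Replace watchCache (rev: N)" lines record the cacher swapping in that initial list at etcd revision N. A minimal sketch of the same list-and-watch pattern, assuming a reachable apiserver; everything except the client-go calls themselves is illustrative and not part of this test:

package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumed local apiserver address; not taken from this run.
	client := kubernetes.NewForConfigOrDie(&rest.Config{Host: "http://127.0.0.1:8080"})

	// The initial LIST populates the store (cf. "Replace watchCache (rev: ...)"),
	// after which a WATCH streams subsequent changes.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "pods", v1.NamespaceAll, fields.Everything())

	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.Pod{}, store, 0)

	stop := make(chan struct{})
	go r.Run(stop) // at -v=3 this logs "Listing and watching *v1.Pod from ..."
	time.Sleep(5 * time.Second)
	close(stop)
}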
I0911 15:44:42.555401  108799 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.555519  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.555535  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.555605  108799 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0911 15:44:42.556129  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.556784  108799 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 15:44:42.556915  108799 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.557049  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.557066  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.557136  108799 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 15:44:42.557680  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.558113  108799 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 15:44:42.558143  108799 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.558319  108799 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 15:44:42.558515  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.558876  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.558915  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.560317  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.561503  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.563683  108799 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0911 15:44:42.563717  108799 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.563838  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.563855  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.564021  108799 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0911 15:44:42.565137  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.565459  108799 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0911 15:44:42.565588  108799 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.565700  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.565716  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.565782  108799 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0911 15:44:42.567251  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.567403  108799 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0911 15:44:42.567515  108799 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0911 15:44:42.567529  108799 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.567630  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.567647  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.568789  108799 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0911 15:44:42.568811  108799 master.go:461] Enabling API group "storage.k8s.io".
I0911 15:44:42.568903  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
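Every storage_factory.go:285 line above dumps the same etcd3 backend configuration. A hedged reconstruction of that struct in Go, with field names and values copied from the dump (the nanosecond durations are written as time.Duration; zero-valued fields such as Codec, Transformer, and the TLS files are omitted and default to nil or ""):

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	cfg := storagebackend.Config{
		Type:   "", // empty selects the default backend, etcd3
		Prefix: "c95f27fb-173a-429f-86bd-1b9c16d44bb5",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"}, // plain-HTTP local etcd
		},
		Paging:                true,
		CompactionInterval:    300 * time.Second, // 300000000000 ns in the dump
		CountMetricPollPeriod: 60 * time.Second,  // 60000000000 ns in the dump
	}
	fmt.Printf("%#v\n", cfg)
}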
I0911 15:44:42.568954  108799 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.569052  108799 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0911 15:44:42.569980  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.570004  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.570978  108799 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0911 15:44:42.571127  108799 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.571224  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.571240  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.571329  108799 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0911 15:44:42.571646  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.572899  108799 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0911 15:44:42.572959  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.573101  108799 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.573187  108799 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0911 15:44:42.573199  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.573215  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.574272  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.574549  108799 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0911 15:44:42.574695  108799 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.574794  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.574811  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.574990  108799 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0911 15:44:42.576573  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.576694  108799 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0911 15:44:42.576838  108799 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.576851  108799 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0911 15:44:42.577074  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.577089  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.578196  108799 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0911 15:44:42.578222  108799 master.go:461] Enabling API group "apps".
I0911 15:44:42.578253  108799 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.578369  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.578401  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.578474  108799 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0911 15:44:42.579057  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.579578  108799 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 15:44:42.579609  108799 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.579718  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.579737  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.579808  108799 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 15:44:42.580505  108799 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 15:44:42.580539  108799 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.580642  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.580658  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.580717  108799 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 15:44:42.581604  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.581714  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.582302  108799 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0911 15:44:42.582335  108799 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.582453  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.582469  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.582542  108799 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0911 15:44:42.583137  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.583401  108799 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0911 15:44:42.583423  108799 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0911 15:44:42.583451  108799 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.583651  108799 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0911 15:44:42.583669  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:42.583684  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:42.586412  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.586419  108799 watch_cache.go:405] Replace watchCache (rev: 28250) 
I0911 15:44:42.587541  108799 store.go:1342] Monitoring events count at <storage-prefix>//events
I0911 15:44:42.587576  108799 master.go:461] Enabling API group "events.k8s.io".
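Each line above carries the standard klog header: severity letter (I/W/E/F), mmdd date, wall-clock time to the microsecond, PID, and source file:line. A hypothetical helper for pulling those fields out of a captured log; the regexp and names here are illustrative, not part of the test:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches "I0911 15:44:42.587576  108799 master.go:461] msg".
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := `I0911 15:44:42.587576  108799 master.go:461] Enabling API group "events.k8s.io".`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}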
I0911 15:44:42.587774  108799 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588010  108799 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588240  108799 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588345  108799 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588438  108799 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588537  108799 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588701  108799 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588801  108799 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588893  108799 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.588998  108799 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.589253  108799 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0911 15:44:42.590061  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.590329  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.590494  108799 watch_cache.go:405] Replace watchCache (rev: 28250)
I0911 15:44:42.591557  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.592031  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.592824  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.593109  108799 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.593820  108799 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.594071  108799 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.594998  108799 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.595235  108799 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.595291  108799 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0911 15:44:42.595852  108799 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.596040  108799 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.596230  108799 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.599120  108799 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.599833  108799 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.601546  108799 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.601791  108799 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.604494  108799 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.606328  108799 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.607785  108799 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.610223  108799 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.610454  108799 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0911 15:44:42.611416  108799 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.611843  108799 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.612609  108799 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.613401  108799 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.614135  108799 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.614930  108799 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.615792  108799 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.616561  108799 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.617240  108799 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.618051  108799 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.618821  108799 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.619059  108799 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0911 15:44:42.619760  108799 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.620489  108799 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.620692  108799 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0911 15:44:42.621452  108799 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.622168  108799 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.622620  108799 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.623530  108799 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.624184  108799 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.624830  108799 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.625534  108799 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.625789  108799 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0911 15:44:42.626904  108799 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.627850  108799 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.628332  108799 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.629420  108799 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.629838  108799 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.630243  108799 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.631117  108799 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.631556  108799 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.631967  108799 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.633026  108799 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.633419  108799 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.634003  108799 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0911 15:44:42.634212  108799 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0911 15:44:42.634313  108799 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0911 15:44:42.635344  108799 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.636389  108799 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.637388  108799 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.638462  108799 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.639924  108799 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"c95f27fb-173a-429f-86bd-1b9c16d44bb5", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0911 15:44:42.645053  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.378424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:42.645233  108799 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 15:44:42.645476  108799 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0911 15:44:42.645651  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:42.645793  108799 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 15:44:42.645917  108799 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 15:44:42.646121  108799 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 15:44:42.646386  108799 httplog.go:90] GET /healthz: (1.574323ms) 0 [Go-http-client/1.1 127.0.0.1:47396]
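[editor's note] The verbose /healthz report above can be fetched directly from the test apiserver; the handler returns one [+]/[-] line per registered check, and a non-200 status corresponds to the "healthz check failed" lines in this log. A minimal Go sketch, assuming a hypothetical listen address (the log only shows client-side ports, not where the integration apiserver is listening):

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    )

    func main() {
    	// Hypothetical address; integration-test apiservers listen locally
    	// without auth, but the actual port is not visible in this log.
    	const apiserverAddr = "http://127.0.0.1:8080"
    	// ?verbose forces the per-check [+]/[-] listing even on success.
    	resp, err := http.Get(apiserverAddr + "/healthz?verbose")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := ioutil.ReadAll(resp.Body)
    	// Non-200 here matches the "GET /healthz: ... 0" probes above.
    	fmt.Println(resp.Status)
    	fmt.Println(string(body))
    }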
I0911 15:44:42.648632  108799 httplog.go:90] GET /api/v1/services: (1.133242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.652157  108799 httplog.go:90] GET /api/v1/services: (934.607µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.654491  108799 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0911 15:44:42.654514  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:42.654526  108799 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 15:44:42.654534  108799 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 15:44:42.654542  108799 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 15:44:42.654564  108799 httplog.go:90] GET /healthz: (165.332µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:42.656695  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.061129ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.658597  108799 httplog.go:90] GET /api/v1/services: (1.538102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:42.661671  108799 httplog.go:90] GET /api/v1/services: (1.26861ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:42.665297  108799 httplog.go:90] POST /api/v1/namespaces: (7.859807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.667820  108799 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.269643ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.670725  108799 httplog.go:90] POST /api/v1/namespaces: (1.892281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.672247  108799 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.25531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:42.673973  108799 httplog.go:90] POST /api/v1/namespaces: (1.445487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
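[editor's note] The GET-404-then-POST-201 pairs just above are the bootstrap controller ensuring the kube-system, kube-public and kube-node-lease namespaces exist. A sketch of the same ensure-namespace pattern over raw HTTP (the addr constant is again hypothetical; real bootstrap code goes through client-go rather than hand-built JSON):

    package main

    import (
    	"bytes"
    	"fmt"
    	"net/http"
    )

    // ensureNamespace mirrors the logged pattern: GET the namespace and,
    // only on 404, POST a minimal v1 Namespace object.
    func ensureNamespace(addr, name string) error {
    	resp, err := http.Get(addr + "/api/v1/namespaces/" + name)
    	if err != nil {
    		return err
    	}
    	resp.Body.Close()
    	if resp.StatusCode != http.StatusNotFound {
    		return nil // already exists; nothing to create
    	}
    	body := fmt.Sprintf(`{"apiVersion":"v1","kind":"Namespace","metadata":{"name":%q}}`, name)
    	resp, err = http.Post(addr+"/api/v1/namespaces", "application/json", bytes.NewBufferString(body))
    	if err != nil {
    		return err
    	}
    	resp.Body.Close()
    	if resp.StatusCode != http.StatusCreated { // the 201s in the log
    		return fmt.Errorf("create %s: %s", name, resp.Status)
    	}
    	return nil
    }

    func main() {
    	const addr = "http://127.0.0.1:8080" // hypothetical
    	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
    		if err := ensureNamespace(addr, ns); err != nil {
    			fmt.Println(err)
    		}
    	}
    }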
[14 further GET /healthz probes between 15:44:42.747 and 15:44:43.356 (two clients polling at ~100ms intervals) returned 0 with the same verbose report as above: [-]etcd, [-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes and [-]poststarthook/ca-registration failing, all other checks ok.]
I0911 15:44:43.432000  108799 client.go:361] parsed scheme: "endpoint"
I0911 15:44:43.432097  108799 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0911 15:44:43.448554  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.448583  108799 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0911 15:44:43.448592  108799 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0911 15:44:43.448599  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0911 15:44:43.448643  108799 httplog.go:90] GET /healthz: (1.367896ms) 0 [Go-http-client/1.1 127.0.0.1:47396]
[3 further GET /healthz probes between 15:44:43.456 and 15:44:43.557 returned 0 with the same report as above: [+]etcd now ok, the three poststarthooks still failing.]
I0911 15:44:43.645445  108799 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (1.410556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:43.645629  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.589676ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.647641  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.560316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47564]
I0911 15:44:43.647707  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.713786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:43.647655  108799 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.782147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.648071  108799 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
[15:44:43.650: one more GET /healthz returned 0 with the identical report: rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes and ca-registration still not finished.]
I0911 15:44:43.650131  108799 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.788341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47564]
I0911 15:44:43.650166  108799 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.709314ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.650307  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.603656ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:43.651754  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.071384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:43.652323  108799 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.842636ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.652482  108799 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.998546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47564]
I0911 15:44:43.652991  108799 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0911 15:44:43.653011  108799 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
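[editor's note] The two POST 201s above come from the scheduling/bootstrap-system-priority-classes poststarthook creating the built-in priority classes with exactly the values logged (system-node-critical 2000001000, system-cluster-critical 2000000000). A sketch of the equivalent requests against the endpoint shown in the log, addr hypothetical:

    package main

    import (
    	"bytes"
    	"fmt"
    	"net/http"
    )

    func main() {
    	const addr = "http://127.0.0.1:8080" // hypothetical
    	// Names and values taken from the storage_scheduling.go lines above.
    	for name, value := range map[string]int64{
    		"system-node-critical":    2000001000,
    		"system-cluster-critical": 2000000000,
    	} {
    		body := fmt.Sprintf(
    			`{"apiVersion":"scheduling.k8s.io/v1beta1","kind":"PriorityClass","metadata":{"name":%q},"value":%d}`,
    			name, value)
    		resp, err := http.Post(addr+"/apis/scheduling.k8s.io/v1beta1/priorityclasses",
    			"application/json", bytes.NewBufferString(body))
    		if err != nil {
    			panic(err)
    		}
    		resp.Body.Close()
    		fmt.Println(name, resp.Status) // expect 201 Created, as logged
    	}
    }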
I0911 15:44:43.653733  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.63833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47396]
I0911 15:44:43.654865  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (698.114µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.655900  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (669.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.656185  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.656207  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:43.656236  108799 httplog.go:90] GET /healthz: (912.068µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.656998  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (711.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.658404  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (821.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.659353  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (685.682µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.661297  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.592065ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.661582  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0911 15:44:43.662427  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (690.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.664293  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.555172ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.664493  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0911 15:44:43.665457  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (738.666µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.667325  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.533512ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.667578  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0911 15:44:43.668835  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.030989ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.670855  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.459106ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.671105  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0911 15:44:43.672096  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (735.833µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.673827  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.400082ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.674142  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0911 15:44:43.675307  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (951.976µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.677044  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.378445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.677296  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0911 15:44:43.678312  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (811.149µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.680093  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.330883ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.680277  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0911 15:44:43.681509  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.009682ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.683462  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.557994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.683712  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0911 15:44:43.684786  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (776.13µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.686983  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.687255  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0911 15:44:43.688257  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (760.339µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.690346  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.683549ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.690680  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0911 15:44:43.691580  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (717.197µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.693319  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.291055ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.693459  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0911 15:44:43.694336  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (750.825µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.696361  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.678565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.696690  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0911 15:44:43.697734  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (893.842µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.699384  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.23611ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.699670  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0911 15:44:43.700521  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (705.881µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.702193  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.345156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.702557  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0911 15:44:43.703618  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (871.673µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.705262  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.248329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.705565  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0911 15:44:43.706720  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (977.597µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.708587  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.413551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.709036  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0911 15:44:43.709901  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (716.346µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.711506  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.280364ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.711773  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0911 15:44:43.712732  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (684.227µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.714927  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.651062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.715239  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 15:44:43.716306  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (777.532µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.718060  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.26476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.718385  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0911 15:44:43.719630  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (958.145µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.721669  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.630693ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.721900  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0911 15:44:43.722899  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (817.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.724839  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.450344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.725145  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0911 15:44:43.726222  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (778.639µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.727922  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.21306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.728173  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0911 15:44:43.729420  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (905.87µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.731285  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.385946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.731527  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0911 15:44:43.732751  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (929.486µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.734549  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.734786  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0911 15:44:43.735767  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (702.501µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.737809  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.511133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.738121  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0911 15:44:43.739373  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (913.273µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.741827  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.745157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.742273  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0911 15:44:43.743380  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (813.011µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.745612  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.944997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.746032  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0911 15:44:43.747778  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.350743ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
[15:44:43.749: GET /healthz returned 0; only [-]poststarthook/rbac/bootstrap-roles still failing, as in the report above.]
I0911 15:44:43.752183  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.671637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.752463  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 15:44:43.753506  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (885.886µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.756313  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.479594ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.757036  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
[15:44:43.757: another GET /healthz returned 0 with the same single rbac/bootstrap-roles failure.]
I0911 15:44:43.759576  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.31878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.762530  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.262823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.763071  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 15:44:43.765081  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.625714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.768475  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.703048ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.768917  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 15:44:43.771147  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.596982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.774048  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.858284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.774502  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 15:44:43.776395  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.240285ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.779240  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.702894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.779571  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 15:44:43.781575  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.591191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.784812  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.245386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.785485  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 15:44:43.787248  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.480338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.791022  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.070064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.791830  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 15:44:43.793551  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.3277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.796478  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.192969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.796796  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 15:44:43.798789  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.776693ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.802400  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.029439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.802757  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 15:44:43.804303  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.247622ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.807631  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.518892ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.808096  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0911 15:44:43.809797  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.377554ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.813093  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.376435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.813570  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 15:44:43.815200  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.20469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.818493  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.592814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.818984  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0911 15:44:43.820795  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.43408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.824314  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.768373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.825111  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 15:44:43.826753  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.252016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.830199  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.405833ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.830471  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 15:44:43.832210  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.449967ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.835119  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.251678ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.835418  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 15:44:43.837167  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.49269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.839856  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.806557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.840141  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 15:44:43.841489  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.114915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.844659  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.291935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.845025  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 15:44:43.847259  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.977597ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.848354  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.848570  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:43.850044  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.181822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.850401  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0911 15:44:43.850051  108799 httplog.go:90] GET /healthz: (2.039625ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:43.852163  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.497403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.855307  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.515068ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.855552  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 15:44:43.856585  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.856625  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:43.856678  108799 httplog.go:90] GET /healthz: (1.20223ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.857111  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.29047ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.861062  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.245018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.861478  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0911 15:44:43.863459  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.404836ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.866300  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.248154ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.867122  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 15:44:43.869237  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.716857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.873123  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.979474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.873420  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 15:44:43.875516  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.615063ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.878672  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.587807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.879429  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 15:44:43.886773  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (2.073016ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.908749  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.146338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.909214  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 15:44:43.927413  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.805304ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.948716  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.948759  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:43.948807  108799 httplog.go:90] GET /healthz: (1.218799ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:43.949019  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.248184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:43.949345  108799 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
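That was the last of the default cluster roles, and every one followed the same shape: a GET that returns 404, a POST that returns 201, then a storage_rbac line confirming the creation. A sketch of that get-or-create step using client-go (v0.18+ signatures with context; the package and function names are illustrative, and this is not the actual storage_rbac implementation):

// Package rbacbootstrap sketches the GET-404-then-POST-201 reconcile step
// visible in the log. Assumes a client-go clientset; names are illustrative.
package rbacbootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureClusterRole creates role only if it does not already exist.
func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present; nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, not just "missing"
	}
	_, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // benign race with a concurrent bootstrapper
	}
	return err
}

Treating IsAlreadyExists as success is what makes the step idempotent, which is why the healthz hook can simply be retried until the whole set reconciles.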
I0911 15:44:43.957634  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:43.957692  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:43.957768  108799 httplog.go:90] GET /healthz: (1.960689ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.966563  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (2.147324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.988482  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.758178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:43.989298  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0911 15:44:44.007284  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (2.391238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.028310  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.833078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.029072  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0911 15:44:44.047194  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (2.782733ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.048918  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.048998  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.049085  108799 httplog.go:90] GET /healthz: (1.563908ms) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:44.070025  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.070102  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.070186  108799 httplog.go:90] GET /healthz: (2.245095ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.074985  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.794034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.075576  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0911 15:44:44.102966  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (17.073673ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.108746  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.343367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.109391  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0911 15:44:44.125844  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.420278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.146490  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.170992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.146682  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0911 15:44:44.148063  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.148087  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.148118  108799 httplog.go:90] GET /healthz: (971.879µs) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.156491  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.156618  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.156764  108799 httplog.go:90] GET /healthz: (1.328846ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.165624  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.338335ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.188285  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.908836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.188541  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0911 15:44:44.205868  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.601918ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.227288  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.939745ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.227549  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0911 15:44:44.245996  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.524688ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.248352  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.248385  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.248419  108799 httplog.go:90] GET /healthz: (1.169684ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.256582  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.256614  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.256658  108799 httplog.go:90] GET /healthz: (1.204145ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.266607  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.392046ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.266885  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0911 15:44:44.285686  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.369032ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.306436  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.208117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.306663  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0911 15:44:44.326442  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.067552ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.347016  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.670969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.347414  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0911 15:44:44.348388  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.348409  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.348445  108799 httplog.go:90] GET /healthz: (1.050344ms) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:44.356707  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.356738  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.356771  108799 httplog.go:90] GET /healthz: (1.036007ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.366163  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.239982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.386781  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.432881ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.387058  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0911 15:44:44.405976  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.602258ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.427391  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.760918ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.427969  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0911 15:44:44.445622  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.318623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.448536  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.448563  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.448595  108799 httplog.go:90] GET /healthz: (1.280848ms) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:44.456464  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.456494  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.456543  108799 httplog.go:90] GET /healthz: (1.07161ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.469889  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.469184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.470218  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0911 15:44:44.485704  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.483001ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.507352  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.047473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.507819  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0911 15:44:44.525576  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.372485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.547631  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.99979ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:44.548016  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0911 15:44:44.551540  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.551569  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.551605  108799 httplog.go:90] GET /healthz: (3.707065ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.557245  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.557283  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.557326  108799 httplog.go:90] GET /healthz: (1.909165ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.565682  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.41206ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.586529  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.239551ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.586815  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0911 15:44:44.606167  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.735045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.628637  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.252719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.628894  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0911 15:44:44.645540  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.303623ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.650034  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.650064  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.650096  108799 httplog.go:90] GET /healthz: (2.854523ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.656671  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.656716  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.656773  108799 httplog.go:90] GET /healthz: (1.275674ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.666860  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.572857ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.667351  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0911 15:44:44.686098  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.795065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.706495  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.17812ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.707237  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0911 15:44:44.725662  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.34615ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.746637  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.26374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.747118  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0911 15:44:44.748704  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.748728  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.748759  108799 httplog.go:90] GET /healthz: (1.539642ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.756519  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.756546  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.756582  108799 httplog.go:90] GET /healthz: (1.104073ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.765727  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.418288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.787348  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.002828ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.787567  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0911 15:44:44.806250  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.741626ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.826500  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.191062ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.826766  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0911 15:44:44.846732  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.312368ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.848315  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.848353  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.848381  108799 httplog.go:90] GET /healthz: (908.505µs) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.856324  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.856356  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.856400  108799 httplog.go:90] GET /healthz: (988.491µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.866605  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.280874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.866829  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0911 15:44:44.885838  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.562025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.906545  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.269581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.906929  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0911 15:44:44.926575  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.344448ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.946528  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.279914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.946790  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0911 15:44:44.948268  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.948489  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.948693  108799 httplog.go:90] GET /healthz: (1.347634ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:44.959374  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:44.959414  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:44.959456  108799 httplog.go:90] GET /healthz: (1.128764ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.965506  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.292303ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.986654  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:44.987115  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0911 15:44:45.005784  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.464584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.026412  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.112901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.026697  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0911 15:44:45.045783  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.500125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.048324  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.048356  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.048385  108799 httplog.go:90] GET /healthz: (1.130492ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.056613  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.056643  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.056695  108799 httplog.go:90] GET /healthz: (1.198261ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.066602  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.332581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.067315  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0911 15:44:45.085824  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.523045ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.106423  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.106823  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0911 15:44:45.125725  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.454645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.146766  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.393318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.147104  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0911 15:44:45.148452  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.148481  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.148512  108799 httplog.go:90] GET /healthz: (1.046313ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.157796  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.157827  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.157880  108799 httplog.go:90] GET /healthz: (1.106514ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.165367  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.230898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.186986  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.712449ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.187249  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0911 15:44:45.205482  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.221645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.226431  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.221897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.226679  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0911 15:44:45.245885  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.614201ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.248172  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.248201  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.248237  108799 httplog.go:90] GET /healthz: (1.035686ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.256388  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.256425  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.256458  108799 httplog.go:90] GET /healthz: (1.050962ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.266542  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.323347ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.266804  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0911 15:44:45.285932  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.708609ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.306729  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.425365ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.307019  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0911 15:44:45.325909  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.545319ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.346861  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.540074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.347695  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0911 15:44:45.349170  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.349198  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.349233  108799 httplog.go:90] GET /healthz: (909.32µs) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.356741  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.357098  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.357329  108799 httplog.go:90] GET /healthz: (1.839589ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.365831  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.51035ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.387892  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.599432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.388523  108799 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0911 15:44:45.405980  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.74498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.408454  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.021995ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.427295  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.92992ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.427822  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
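From here the bootstrap moves on to namespaced Roles; note the extra GET /api/v1/namespaces/kube-system returning 200 before each create, which confirms the target namespace exists first. A sketch of that namespaced variant, under the same illustrative client-go assumptions as the cluster-scoped sketch above:

// Namespaced variant: verify the namespace, then get-or-create the Role.
package rbacbootstrap

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureRole(ctx context.Context, cs kubernetes.Interface, ns string, role *rbacv1.Role) error {
	// Mirrors the GET /api/v1/namespaces/<ns> calls in the log above.
	if _, err := cs.CoreV1().Namespaces().Get(ctx, ns, metav1.GetOptions{}); err != nil {
		return err
	}
	_, err := cs.RbacV1().Roles(ns).Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	_, err = cs.RbacV1().Roles(ns).Create(ctx, role, metav1.CreateOptions{})
	return err
}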
I0911 15:44:45.455460  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (11.138137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.455650  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.455669  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.455705  108799 httplog.go:90] GET /healthz: (8.335357ms) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:45.457669  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.457699  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.457732  108799 httplog.go:90] GET /healthz: (1.361526ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.457843  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.531972ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.466548  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.317113ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.466802  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 15:44:45.485891  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.597465ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.488161  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.78638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.506643  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.356888ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.506904  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 15:44:45.525750  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.5173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.529265  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.888742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.546675  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.390268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.547451  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 15:44:45.548227  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.548252  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.548298  108799 httplog.go:90] GET /healthz: (976.138µs) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:45.556475  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.556503  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.556567  108799 httplog.go:90] GET /healthz: (1.14229ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.565474  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.235498ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.567119  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.12641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.586826  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.525567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.587184  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 15:44:45.605672  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.361592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.607470  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.305624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.626862  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.494606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.627167  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 15:44:45.645798  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.426998ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.647646  108799 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.324545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.648553  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.648584  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.648615  108799 httplog.go:90] GET /healthz: (1.071314ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.656526  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.656559  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.656592  108799 httplog.go:90] GET /healthz: (1.17907ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.666443  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.182784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.667394  108799 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 15:44:45.686089  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.720875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.688988  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.736369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.706326  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.090914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.706524  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0911 15:44:45.725643  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.22029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.727761  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.396689ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.746666  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.384866ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.747188  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0911 15:44:45.748567  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.748796  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.749091  108799 httplog.go:90] GET /healthz: (1.650282ms) 0 [Go-http-client/1.1 127.0.0.1:47566]
I0911 15:44:45.756490  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.756516  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.756554  108799 httplog.go:90] GET /healthz: (1.066678ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.765578  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.316359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.767588  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.554797ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.804150  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (19.684438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.804413  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0911 15:44:45.809518  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (4.927253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.813893  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.917834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.828548  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.990133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.828737  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0911 15:44:45.845887  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.719397ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.850011  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.741823ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.851452  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.851475  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.851518  108799 httplog.go:90] GET /healthz: (2.024653ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.861031  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.861061  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.861109  108799 httplog.go:90] GET /healthz: (5.765405ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.867119  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.945177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.867507  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0911 15:44:45.885989  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.704228ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.889455  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.007592ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.907740  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.500146ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.907990  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0911 15:44:45.944442  108799 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (13.335885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.946759  108799 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.836706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:45.953295  108799 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0911 15:44:45.953343  108799 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0911 15:44:45.953398  108799 httplog.go:90] GET /healthz: (6.251257ms) 0 [Go-http-client/1.1 127.0.0.1:47394]
I0911 15:44:45.953580  108799 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (6.382398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.953791  108799 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0911 15:44:45.978913  108799 httplog.go:90] GET /healthz: (23.415181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.980619  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.260619ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.983099  108799 httplog.go:90] POST /api/v1/namespaces: (2.076178ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:45.987512  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (4.103423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.000633  108799 httplog.go:90] POST /api/v1/namespaces/default/services: (12.722036ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.002556  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.205467ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.005137  108799 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.149574ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.048456  108799 httplog.go:90] GET /healthz: (1.192876ms) 200 [Go-http-client/1.1 127.0.0.1:47566]
W0911 15:44:46.049264  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049289  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049319  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049330  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049341  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049350  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049370  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049380  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049390  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049436  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0911 15:44:46.049447  108799 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0911 15:44:46.049464  108799 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0911 15:44:46.049473  108799 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
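The factory.go lines above spell out the DefaultProvider policy: a set of fit predicates that filter out infeasible nodes, then a set of priority functions that score whatever survives; this is also where the later "2 nodes evaluated, 2 nodes were found feasible" messages come from. Below is a minimal, self-contained sketch of that two-phase selection; the predicate and priority signatures are placeholders, not the real scheduler interfaces:

```go
package main

import "fmt"

// A generic model of the policy listed above: predicates filter nodes,
// priorities score the survivors. These signatures are placeholders,
// not the real scheduler plugin interfaces.
type (
	predicate func(pod, node string) bool
	priority  func(pod, node string) int
)

// schedule keeps the nodes that pass every predicate, sums the priority
// scores for each survivor, and picks the highest-scoring node.
func schedule(pod string, nodes []string, preds []predicate, prios []priority) (string, error) {
	best, bestScore, feasible := "", -1, 0
	for _, n := range nodes {
		fits := true
		for _, p := range preds {
			if !p(pod, n) {
				fits = false
				break
			}
		}
		if !fits {
			continue
		}
		feasible++
		score := 0
		for _, pr := range prios {
			score += pr(pod, n)
		}
		if score > bestScore {
			best, bestScore = n, score
		}
	}
	if best == "" {
		return "", fmt.Errorf("0/%d nodes are available for pod %q", len(nodes), pod)
	}
	fmt.Printf("%d nodes evaluated, %d found feasible\n", len(nodes), feasible)
	return best, nil
}

func main() {
	hasCapacity := func(pod, node string) bool { return true } // placeholder predicate
	spread := func(pod, node string) int { return len(node) }  // placeholder priority
	node, _ := schedule("test-pod", []string{"test-node-0", "test-node-1"},
		[]predicate{hasCapacity}, []priority{spread})
	fmt.Println("bound to", node)
}
```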
I0911 15:44:46.049978  108799 reflector.go:120] Starting reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.050000  108799 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.050343  108799 reflector.go:120] Starting reflector *v1beta1.CSINode (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.050359  108799 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.050636  108799 reflector.go:120] Starting reflector *v1.PersistentVolume (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.050648  108799 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.051205  108799 reflector.go:120] Starting reflector *v1.ReplicationController (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.051222  108799 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.051315  108799 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (582.419µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:44:46.051630  108799 reflector.go:120] Starting reflector *v1.StatefulSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.051643  108799 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.051924  108799 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (484.593µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.053457  108799 reflector.go:120] Starting reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.053480  108799 reflector.go:158] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.053589  108799 get.go:250] Starting watch for /api/v1/nodes, rv=28226 labels= fields= timeout=7m28s
I0911 15:44:46.054105  108799 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (468.359µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47746]
I0911 15:44:46.054688  108799 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=28250 labels= fields= timeout=7m11s
I0911 15:44:46.054751  108799 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=28250 labels= fields= timeout=5m18s
I0911 15:44:46.055080  108799 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.055095  108799 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.055762  108799 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (473.868µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47748]
I0911 15:44:46.056017  108799 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (482.513µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47750]
I0911 15:44:46.056390  108799 get.go:250] Starting watch for /api/v1/pods, rv=28226 labels= fields= timeout=9m58s
I0911 15:44:46.056675  108799 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=28221 labels= fields= timeout=7m39s
I0911 15:44:46.056741  108799 reflector.go:120] Starting reflector *v1.ReplicaSet (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.056759  108799 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.057347  108799 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (4.331186ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:44:46.057548  108799 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (410.628µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47752]
I0911 15:44:46.057976  108799 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=28221 labels= fields= timeout=8m3s
I0911 15:44:46.058158  108799 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=28250 labels= fields= timeout=8m18s
I0911 15:44:46.054509  108799 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.058325  108799 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.058529  108799 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (5.747757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47744]
I0911 15:44:46.059095  108799 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (410.615µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47754]
I0911 15:44:46.059811  108799 reflector.go:120] Starting reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.059828  108799 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.060274  108799 reflector.go:120] Starting reflector *v1.StorageClass (0s) from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.060302  108799 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0911 15:44:46.061125  108799 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (468.429µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47758]
I0911 15:44:46.061322  108799 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (678.921µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47756]
I0911 15:44:46.061712  108799 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28250 labels= fields= timeout=7m58s
I0911 15:44:46.062091  108799 get.go:250] Starting watch for /api/v1/services, rv=28382 labels= fields= timeout=9m22s
I0911 15:44:46.062795  108799 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=28227 labels= fields= timeout=9m13s
I0911 15:44:46.063108  108799 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28249 labels= fields= timeout=8m11s
I0911 15:44:46.149932  108799 shared_informer.go:227] caches populated
I0911 15:44:46.250155  108799 shared_informer.go:227] caches populated
I0911 15:44:46.350374  108799 shared_informer.go:227] caches populated
I0911 15:44:46.450521  108799 shared_informer.go:227] caches populated
I0911 15:44:46.550694  108799 shared_informer.go:227] caches populated
I0911 15:44:46.651046  108799 shared_informer.go:227] caches populated
I0911 15:44:46.751234  108799 shared_informer.go:227] caches populated
I0911 15:44:46.851425  108799 shared_informer.go:227] caches populated
I0911 15:44:46.951617  108799 shared_informer.go:227] caches populated
I0911 15:44:47.051830  108799 shared_informer.go:227] caches populated
I0911 15:44:47.152076  108799 shared_informer.go:227] caches populated
I0911 15:44:47.252290  108799 shared_informer.go:227] caches populated
I0911 15:44:47.255724  108799 node_tree.go:93] Added node "test-node-0" in group "" to NodeTree
I0911 15:44:47.256030  108799 httplog.go:90] POST /api/v1/nodes: (2.991644ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.258382  108799 httplog.go:90] POST /api/v1/nodes: (1.907398ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.258726  108799 node_tree.go:93] Added node "test-node-1" in group "" to NodeTree
I0911 15:44:47.260969  108799 scheduling_queue.go:830] About to try and schedule pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.260997  108799 scheduler.go:530] Attempting to schedule pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.261231  108799 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0"
I0911 15:44:47.261256  108799 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 15:44:47.261305  108799 factory.go:606] Attempting to bind test-pod to test-node-0
I0911 15:44:47.263331  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods: (4.482411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.263794  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod/binding: (1.96409ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.264006  108799 scheduler.go:667] pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 15:44:47.266021  108799 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/events: (1.66657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.365883  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.800522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.367894  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.502447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.383400  108799 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (4.690664ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.385998  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.074926ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.388230  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods: (1.830626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.388683  108799 scheduling_queue.go:830] About to try and schedule pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.388698  108799 scheduler.go:530] Attempting to schedule pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.388926  108799 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0"
I0911 15:44:47.388964  108799 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0911 15:44:47.391206  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod/binding: (1.989914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.391407  108799 scheduler.go:667] pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 15:44:47.393559  108799 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/events: (1.696153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.491408  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.730171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.493136  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.234631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.508168  108799 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (4.397547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.511057  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (953.855µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.514293  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods: (2.693336ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.514699  108799 scheduling_queue.go:830] About to try and schedule pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.514714  108799 scheduler.go:530] Attempting to schedule pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.514970  108799 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-1"
I0911 15:44:47.515001  108799 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-1": all PVCs bound and nothing to do
I0911 15:44:47.517153  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod/binding: (1.876766ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.517291  108799 scheduler.go:667] pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod is bound successfully on node "test-node-1", 2 nodes evaluated, 2 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0911 15:44:47.519447  108799 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/events: (1.395663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.616618  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.665511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.618437  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.229603ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.635393  108799 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (6.11005ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.638541  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.092246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.640580  108799 httplog.go:90] POST /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods: (1.581125ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.640694  108799 scheduling_queue.go:830] About to try and schedule pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.640731  108799 scheduler.go:530] Attempting to schedule pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:44:47.640988  108799 scheduler_binder.go:256] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0"
I0911 15:44:47.641010  108799 scheduler_binder.go:266] AssumePodVolumes for pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod", node "test-node-0": all PVCs bound and nothing to do
E0911 15:44:47.641141  108799 framework.go:457] bind plugin "bind-plugin-1" failed to bind pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod": failed to bind
I0911 15:44:47.641159  108799 scheduler.go:500] Failed to bind pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
E0911 15:44:47.641176  108799 scheduler.go:658] error binding pod: Bind failure, code: 1: bind plugin "bind-plugin-1" failed to bind pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod": failed to bind
E0911 15:44:47.641197  108799 factory.go:557] Error scheduling bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod: Bind failure, code: 1: bind plugin "bind-plugin-1" failed to bind pod "bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod": failed to bind; retrying
I0911 15:44:47.641223  108799 factory.go:615] Updating pod condition for bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0911 15:44:47.649732  108799 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/events: (3.213089ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48002]
I0911 15:44:47.650374  108799 httplog.go:90] PUT /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod/status: (3.961754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47878]
I0911 15:44:47.650688  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (4.381986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:47.653208  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (1.282655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
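The burst at 15:44:47.641 above is the interesting part of this run: the test's bind plugin "bind-plugin-1" rejects the pod, framework.go:457 logs the plugin error, scheduler.go:658 wraps it as "Bind failure, code: 1", and factory.go re-queues the pod with PodScheduled==False. A self-contained sketch of that plugin-chain shape follows; the status type and the skip behavior are simplified stand-ins (only the error code 1 is taken from the log), not the real framework API:

```go
package main

import "fmt"

// Simplified stand-ins for framework status codes. Only codeError = 1 is
// taken from the log line "Bind failure, code: 1"; codeSkip is illustrative.
const (
	codeError = 1
	codeSkip  = 2
)

// status models a plugin result; a nil *status means the bind succeeded.
type status struct {
	code    int
	message string
}

// bindPlugin is a simplified model of the framework's bind extension point.
type bindPlugin interface {
	bind(pod, node string) *status
}

// failingBind plays the role of "bind-plugin-1" in the log above: it claims
// the pod and then fails the bind.
type failingBind struct{}

func (failingBind) bind(pod, node string) *status {
	return &status{code: codeError, message: fmt.Sprintf(
		`bind plugin "bind-plugin-1" failed to bind pod %q: failed to bind`, pod)}
}

// runBindPlugins mirrors the framework loop: plugins run in order, a skip
// hands the pod to the next plugin, and the first real decision wins.
func runBindPlugins(plugins []bindPlugin, pod, node string) error {
	for _, pl := range plugins {
		st := pl.bind(pod, node)
		if st != nil && st.code == codeSkip {
			continue
		}
		if st != nil {
			// The shape of the error reported at scheduler.go:658 above.
			return fmt.Errorf("Bind failure, code: %d: %s", st.code, st.message)
		}
		return nil
	}
	return fmt.Errorf("no bind plugin handled pod %q", pod)
}

func main() {
	err := runBindPlugins([]bindPlugin{failingBind{}}, "test-pod", "test-node-0")
	fmt.Println(err)
}
```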
I0911 15:44:55.981074  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.516341ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:55.982615  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.127637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:44:55.983929  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (917.104µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:05.981321  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.585897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:05.983175  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.482917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:05.984687  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.196657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:15.981620  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.711688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:15.983264  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.243724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:15.984782  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.100203ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:25.981494  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.223945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:25.983153  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.301734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:25.984475  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.006777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:35.981749  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.552667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:35.983262  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.026955ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:35.984641  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (977.197µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:42.676011  108799 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.418854ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:42.677598  108799 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.092085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:42.679018  108799 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (997.006µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:45.981800  108799 httplog.go:90] GET /api/v1/namespaces/default: (1.426139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:45.984020  108799 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.75213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:45.985958  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.403222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:47.659242  108799 scheduling_queue.go:830] About to try and schedule pod bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:45:47.659290  108799 scheduler.go:526] Skip schedule deleting pod: bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/test-pod
I0911 15:45:47.663251  108799 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/events: (3.633991ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:48002]
I0911 15:45:47.664085  108799 httplog.go:90] DELETE /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (9.963846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:47.666459  108799 httplog.go:90] GET /api/v1/namespaces/bind-plugin71203dcf-ffb5-4f4c-b90c-e75c697942e7/pods/test-pod: (866.569µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
E0911 15:45:47.666958  108799 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0911 15:45:47.667265  108799 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=28250&timeout=7m11s&timeoutSeconds=431&watch=true: (1m1.612805549s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47746]
I0911 15:45:47.667367  108799 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=28382&timeout=9m22s&timeoutSeconds=562&watch=true: (1m1.605575166s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47756]
I0911 15:45:47.667378  108799 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=28250&timeout=8m18s&timeoutSeconds=498&watch=true: (1m1.609494203s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47752]
I0911 15:45:47.667385  108799 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28250&timeout=7m58s&timeoutSeconds=478&watch=true: (1m1.605885666s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47758]
I0911 15:45:47.667479  108799 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28249&timeout=8m11s&timeoutSeconds=491&watch=true: (1m1.608034473s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47754]
I0911 15:45:47.667484  108799 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=28227&timeout=9m13s&timeoutSeconds=553&watch=true: (1m1.608548966s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47744]
I0911 15:45:47.667520  108799 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=28250&timeout=5m18s&timeoutSeconds=318&watch=true: (1m1.614670891s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47394]
I0911 15:45:47.667543  108799 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28226&timeout=7m28s&timeoutSeconds=448&watch=true: (1m1.61428854s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47742]
I0911 15:45:47.667570  108799 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28221&timeout=7m39s&timeoutSeconds=459&watch=true: (1m1.611195659s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47750]
I0911 15:45:47.667638  108799 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28226&timeout=9m58s&timeoutSeconds=598&watch=true: (1m1.611533907s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47748]
I0911 15:45:47.667883  108799 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28221&timeout=8m3s&timeoutSeconds=483&watch=true: (1m1.610201742s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47566]
I0911 15:45:47.672300  108799 httplog.go:90] DELETE /api/v1/nodes: (4.704958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:47.672465  108799 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0911 15:45:47.673546  108799 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (884.525µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
I0911 15:45:47.675292  108799 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.24295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:47880]
--- FAIL: TestBindPlugin (65.25s)
    framework_test.go:1028: test #3: Waiting for invoke event 2 timeout.
    framework_test.go:1028: test #3: Waiting for invoke event 3 timeout.

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190911-153756.xml
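The two framework_test.go:1028 messages are the actual failure: the harness records an "invoke event" each time the test's bind plugins run, and test #3 saw event 1 (the failed bind at 15:44:47) but timed out waiting for events 2 and 3, presumably expected from the scheduler retrying the pod; no retry appears in the log before the pod is deleted a minute later. Below is a stdlib-only sketch of that wait-for-count assertion pattern (the real test likely uses a polling helper such as wait.Poll; the counter, interval, and timeout are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// waitForEvent polls until seen() reaches the wanted invocation count or the
// deadline passes, mirroring the "Waiting for invoke event N timeout." shape.
func waitForEvent(seen func() int, want int, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if seen() >= want {
			return nil
		}
		time.Sleep(100 * time.Millisecond) // illustrative poll interval
	}
	return fmt.Errorf("Waiting for invoke event %d timeout.", want)
}

func main() {
	invocations := 1 // the plugin ran once; the pod was never retried in time
	err := waitForEvent(func() int { return invocations }, 2, time.Second)
	fmt.Println(err) // Waiting for invoke event 2 timeout.
}
```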

Error lines from build-log.txt

... skipping 932 lines ...
W0911 15:33:37.094] I0911 15:33:36.970255   52874 replica_set.go:182] Starting replicaset controller
W0911 15:33:37.095] I0911 15:33:36.970269   52874 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
W0911 15:33:37.095] I0911 15:33:36.973699   52874 controllermanager.go:534] Started "horizontalpodautoscaling"
W0911 15:33:37.095] W0911 15:33:36.973767   52874 controllermanager.go:526] Skipping "csrsigning"
W0911 15:33:37.095] I0911 15:33:36.974142   52874 horizontal.go:156] Starting HPA controller
W0911 15:33:37.095] I0911 15:33:36.974178   52874 shared_informer.go:197] Waiting for caches to sync for HPA
W0911 15:33:37.095] E0911 15:33:36.974650   52874 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0911 15:33:37.095] W0911 15:33:36.974753   52874 controllermanager.go:526] Skipping "service"
W0911 15:33:37.096] W0911 15:33:36.975649   52874 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0911 15:33:37.096] I0911 15:33:36.976551   52874 controllermanager.go:534] Started "attachdetach"
W0911 15:33:37.096] I0911 15:33:36.976592   52874 attach_detach_controller.go:334] Starting attach detach controller
W0911 15:33:37.096] I0911 15:33:36.976619   52874 shared_informer.go:197] Waiting for caches to sync for attach detach
W0911 15:33:37.096] I0911 15:33:36.977133   52874 controllermanager.go:534] Started "endpoint"
... skipping 61 lines ...
W0911 15:33:37.654] I0911 15:33:37.590880   52874 garbagecollector.go:130] Starting garbage collector controller
W0911 15:33:37.654] I0911 15:33:37.592166   52874 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0911 15:33:37.654] I0911 15:33:37.592302   52874 graph_builder.go:282] GraphBuilder running
W0911 15:33:37.654] I0911 15:33:37.591988   52874 deployment_controller.go:152] Starting deployment controller
W0911 15:33:37.654] I0911 15:33:37.592437   52874 shared_informer.go:197] Waiting for caches to sync for deployment
W0911 15:33:37.654] I0911 15:33:37.592795   52874 node_lifecycle_controller.go:77] Sending events to api server
W0911 15:33:37.655] E0911 15:33:37.592859   52874 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0911 15:33:37.655] W0911 15:33:37.592870   52874 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0911 15:33:37.655] I0911 15:33:37.593869   52874 controllermanager.go:534] Started "persistentvolume-binder"
W0911 15:33:37.655] I0911 15:33:37.593964   52874 pv_controller_base.go:282] Starting persistent volume controller
W0911 15:33:37.655] I0911 15:33:37.594021   52874 shared_informer.go:197] Waiting for caches to sync for persistent volume
W0911 15:33:37.655] I0911 15:33:37.594246   52874 controllermanager.go:534] Started "podgc"
W0911 15:33:37.655] I0911 15:33:37.594273   52874 gc_controller.go:75] Starting GC controller
... skipping 8 lines ...
W0911 15:33:37.657] I0911 15:33:37.596319   52874 shared_informer.go:197] Waiting for caches to sync for disruption
W0911 15:33:37.657] W0911 15:33:37.596309   52874 controllermanager.go:513] "endpointslice" is disabled
W0911 15:33:37.657] W0911 15:33:37.596332   52874 controllermanager.go:513] "bootstrapsigner" is disabled
W0911 15:33:37.657] I0911 15:33:37.596609   52874 controllermanager.go:534] Started "pvc-protection"
W0911 15:33:37.657] I0911 15:33:37.596620   52874 pvc_protection_controller.go:100] Starting PVC protection controller
W0911 15:33:37.657] I0911 15:33:37.596637   52874 shared_informer.go:197] Waiting for caches to sync for PVC protection
W0911 15:33:37.657] W0911 15:33:37.603646   52874 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0911 15:33:37.658] I0911 15:33:37.657905   52874 shared_informer.go:204] Caches are synced for ReplicationController 
W0911 15:33:37.658] I0911 15:33:37.658364   52874 shared_informer.go:204] Caches are synced for certificate 
W0911 15:33:37.659] I0911 15:33:37.659541   52874 shared_informer.go:204] Caches are synced for expand 
W0911 15:33:37.661] I0911 15:33:37.660793   52874 shared_informer.go:204] Caches are synced for stateful set 
W0911 15:33:37.661] I0911 15:33:37.661006   52874 shared_informer.go:204] Caches are synced for TTL 
W0911 15:33:37.661] I0911 15:33:37.661613   52874 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0911 15:33:37.670] I0911 15:33:37.670323   52874 shared_informer.go:204] Caches are synced for daemon sets 
W0911 15:33:37.671] I0911 15:33:37.670383   52874 shared_informer.go:204] Caches are synced for ReplicaSet 
W0911 15:33:37.674] I0911 15:33:37.674376   52874 shared_informer.go:204] Caches are synced for HPA 
W0911 15:33:37.676] E0911 15:33:37.675601   52874 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0911 15:33:37.678] I0911 15:33:37.677838   52874 shared_informer.go:204] Caches are synced for endpoint 
W0911 15:33:37.684] I0911 15:33:37.683962   52874 shared_informer.go:204] Caches are synced for PV protection 
W0911 15:33:37.693] I0911 15:33:37.692617   52874 shared_informer.go:204] Caches are synced for deployment 
W0911 15:33:37.694] I0911 15:33:37.694226   52874 shared_informer.go:204] Caches are synced for persistent volume 
W0911 15:33:37.695] I0911 15:33:37.695222   52874 shared_informer.go:204] Caches are synced for GC 
W0911 15:33:37.697] I0911 15:33:37.696768   52874 shared_informer.go:204] Caches are synced for PVC protection 
... skipping 74 lines ...
I0911 15:33:40.707] +++ working dir: /go/src/k8s.io/kubernetes
I0911 15:33:40.710] +++ command: run_RESTMapper_evaluation_tests
I0911 15:33:40.719] +++ [0911 15:33:40] Creating namespace namespace-1568216020-14022
I0911 15:33:40.782] namespace/namespace-1568216020-14022 created
I0911 15:33:40.842] Context "test" modified.
I0911 15:33:40.847] +++ [0911 15:33:40] Testing RESTMapper
I0911 15:33:40.931] +++ [0911 15:33:40] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0911 15:33:40.942] +++ exit code: 0
I0911 15:33:41.039] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0911 15:33:41.040] bindings                                                                      true         Binding
I0911 15:33:41.040] componentstatuses                 cs                                          false        ComponentStatus
I0911 15:33:41.040] configmaps                        cm                                          true         ConfigMap
I0911 15:33:41.040] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0911 15:33:58.195] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0911 15:33:58.276] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0911 15:33:58.343] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0911 15:33:58.422] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0911 15:33:58.586] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:33:58.752] (Bpod/env-test-pod created
W0911 15:33:58.852] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0911 15:33:58.853] error: setting 'all' parameter but found a non empty selector. 
W0911 15:33:58.854] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0911 15:33:58.854] I0911 15:33:57.840711   49323 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0911 15:33:58.855] error: min-available and max-unavailable cannot be both specified
I0911 15:33:58.955] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0911 15:33:58.956] Name:         env-test-pod
I0911 15:33:58.956] Namespace:    test-kubectl-describe-pod
I0911 15:33:58.956] Priority:     0
I0911 15:33:58.956] Node:         <none>
I0911 15:33:58.956] Labels:       <none>
... skipping 174 lines ...
I0911 15:34:11.243] (Bpod/valid-pod patched
I0911 15:34:11.340] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0911 15:34:11.422] (Bpod/valid-pod patched
I0911 15:34:11.525] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0911 15:34:11.713] (Bpod/valid-pod patched
I0911 15:34:11.803] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0911 15:34:11.981] (B+++ [0911 15:34:11] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0911 15:34:12.209] pod "valid-pod" deleted
I0911 15:34:12.217] pod/valid-pod replaced
I0911 15:34:12.299] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0911 15:34:12.438] (BSuccessful
I0911 15:34:12.439] message:error: --grace-period must have --force specified
I0911 15:34:12.439] has:\-\-grace-period must have \-\-force specified
I0911 15:34:12.571] Successful
I0911 15:34:12.571] message:error: --timeout must have --force specified
I0911 15:34:12.572] has:\-\-timeout must have \-\-force specified
I0911 15:34:12.716] node/node-v1-test created
W0911 15:34:12.816] W0911 15:34:12.715537   52874 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0911 15:34:12.917] node/node-v1-test replaced
I0911 15:34:12.942] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0911 15:34:13.017] (Bnode "node-v1-test" deleted
I0911 15:34:13.099] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0911 15:34:13.333] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0911 15:34:14.158] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 67 lines ...
I0911 15:34:17.789] (Bpod/test-pod created
W0911 15:34:17.890] I0911 15:34:12.985554   52874 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"b7746052-6406-4205-8890-e2257b40ff1e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
W0911 15:34:17.890] Edit cancelled, no changes made.
W0911 15:34:17.890] Edit cancelled, no changes made.
W0911 15:34:17.890] Edit cancelled, no changes made.
W0911 15:34:17.890] Edit cancelled, no changes made.
W0911 15:34:17.891] error: 'name' already has a value (valid-pod), and --overwrite is false
W0911 15:34:17.891] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0911 15:34:17.891] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0911 15:34:17.986] I0911 15:34:17.985904   52874 event.go:255] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"b7746052-6406-4205-8890-e2257b40ff1e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-v1-test event: Removing Node node-v1-test from Controller
I0911 15:34:18.087] pod "test-pod" deleted
I0911 15:34:18.087] +++ [0911 15:34:17] Creating namespace namespace-1568216057-30250
I0911 15:34:18.087] namespace/namespace-1568216057-30250 created
... skipping 42 lines ...
I0911 15:34:20.777] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0911 15:34:20.780] +++ working dir: /go/src/k8s.io/kubernetes
I0911 15:34:20.782] +++ command: run_kubectl_create_error_tests
I0911 15:34:20.791] +++ [0911 15:34:20] Creating namespace namespace-1568216060-7293
I0911 15:34:20.853] namespace/namespace-1568216060-7293 created
I0911 15:34:20.914] Context "test" modified.
I0911 15:34:20.919] +++ [0911 15:34:20] Testing kubectl create with error
W0911 15:34:21.020] Error: must specify one of -f and -k
W0911 15:34:21.021] 
W0911 15:34:21.021] Create a resource from a file or from stdin.
W0911 15:34:21.021] 
W0911 15:34:21.022]  JSON and YAML formats are accepted.
W0911 15:34:21.022] 
W0911 15:34:21.022] Examples:
... skipping 41 lines ...
W0911 15:34:21.029] 
W0911 15:34:21.029] Usage:
W0911 15:34:21.030]   kubectl create -f FILENAME [options]
W0911 15:34:21.030] 
W0911 15:34:21.030] Use "kubectl <command> --help" for more information about a given command.
W0911 15:34:21.030] Use "kubectl options" for a list of global command-line options (applies to all commands).
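The usage dump above is kubectl's input validation for create: exactly one source must be supplied. A minimal sketch of the two accepted forms (paths illustrative):
  $ kubectl create -f pod.yaml   # from a manifest file, or stdin via "-f -"
  $ kubectl create -k base/      # from a kustomization directory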
I0911 15:34:21.131] +++ [0911 15:34:21] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0911 15:34:21.232] kubectl convert is DEPRECATED and will be removed in a future version.
W0911 15:34:21.232] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0911 15:34:21.333] +++ exit code: 0
I0911 15:34:21.333] Recording: run_kubectl_apply_tests
I0911 15:34:21.333] Running command: run_kubectl_apply_tests
I0911 15:34:21.334] 
... skipping 16 lines ...
I0911 15:34:22.553] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0911 15:34:22.619] pod "test-pod" deleted
I0911 15:34:22.808] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0911 15:34:23.044] I0911 15:34:23.043785   49323 client.go:361] parsed scheme: "endpoint"
W0911 15:34:23.045] I0911 15:34:23.043831   49323 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0911 15:34:23.047] I0911 15:34:23.046953   49323 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0911 15:34:23.120] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0911 15:34:23.221] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0911 15:34:23.221] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0911 15:34:23.222] +++ exit code: 0
I0911 15:34:23.242] Recording: run_kubectl_run_tests
I0911 15:34:23.243] Running command: run_kubectl_run_tests
I0911 15:34:23.261] 
... skipping 84 lines ...
I0911 15:34:25.255] Context "test" modified.
I0911 15:34:25.260] +++ [0911 15:34:25] Testing kubectl create filter
I0911 15:34:25.335] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:25.463] pod/selector-test-pod created
I0911 15:34:25.544] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0911 15:34:25.616] Successful
I0911 15:34:25.616] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0911 15:34:25.616] has:pods "selector-test-pod-dont-apply" not found
I0911 15:34:25.680] pod "selector-test-pod" deleted
I0911 15:34:25.696] +++ exit code: 0
I0911 15:34:25.723] Recording: run_kubectl_apply_deployments_tests
I0911 15:34:25.723] Running command: run_kubectl_apply_deployments_tests
I0911 15:34:25.742] 
... skipping 23 lines ...
I0911 15:34:27.204] apps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:27.279] apps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:27.353] apps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:27.482] deployment.apps/nginx created
I0911 15:34:27.567] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0911 15:34:31.734] Successful
I0911 15:34:31.735] message:Error from server (Conflict): error when applying patch:
I0911 15:34:31.736] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568216065-28491\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0911 15:34:31.736] to:
I0911 15:34:31.736] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0911 15:34:31.737] Name: "nginx", Namespace: "namespace-1568216065-28491"
I0911 15:34:31.738] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568216065-28491\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-11T15:34:27Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568216065-28491" "resourceVersion":"588" "selfLink":"/apis/apps/v1/namespaces/namespace-1568216065-28491/deployments/nginx" "uid":"f7dd79c3-a2e9-4519-8c3c-2e0bf04c1233"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-11T15:34:27Z" "lastUpdateTime":"2019-09-11T15:34:27Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-11T15:34:27Z" "lastUpdateTime":"2019-09-11T15:34:27Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0911 15:34:31.739] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0911 15:34:31.739] has:Error from server (Conflict)
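The Conflict above is the point of this case: the applied patch pins "resourceVersion":"99", so the server's optimistic-concurrency check rejects it once the live Deployment has advanced past that version. A sketch of the failing step (file path from the test data):
  $ kubectl apply -f hack/testdata/deployment-label-change2.yaml
  # Error from server (Conflict): Operation cannot be fulfilled on deployments.apps "nginx": ...
  # Remedy: drop metadata.resourceVersion from the manifest, or re-get and
  # re-apply against the latest version.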
W0911 15:34:31.840] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:34:31.840] I0911 15:34:23.545222   49323 controller.go:606] quota admission added evaluator for: jobs.batch
W0911 15:34:31.841] I0911 15:34:23.559104   52874 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568216063-936", Name:"pi", UID:"305b8265-9ec4-49d2-833c-ec0c5ee20955", APIVersion:"batch/v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-7mqpb
W0911 15:34:31.841] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:34:31.841] I0911 15:34:24.018265   49323 controller.go:606] quota admission added evaluator for: deployments.apps
W0911 15:34:31.842] I0911 15:34:24.031057   49323 controller.go:606] quota admission added evaluator for: replicasets.apps
... skipping 3 lines ...
W0911 15:34:31.843] I0911 15:34:24.318524   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216063-936", Name:"nginx-apps", UID:"5d5cfd44-0180-4ba0-b674-b17e9d3fd8fe", APIVersion:"apps/v1", ResourceVersion:"525", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-apps-f88d5cfc9 to 1
W0911 15:34:31.844] I0911 15:34:24.321204   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216063-936", Name:"nginx-apps-f88d5cfc9", UID:"1ff8ad21-9349-4668-a0c0-3112755431c4", APIVersion:"apps/v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-apps-f88d5cfc9-ks586
W0911 15:34:31.844] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:34:31.844] I0911 15:34:24.686342   49323 controller.go:606] quota admission added evaluator for: cronjobs.batch
W0911 15:34:31.845] I0911 15:34:26.256474   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216065-28491", Name:"my-depl", UID:"36574aa5-9d9f-4461-8de9-ea62c98656cb", APIVersion:"apps/v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-64b97f7d4d to 1
W0911 15:34:31.845] I0911 15:34:26.260133   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216065-28491", Name:"my-depl-64b97f7d4d", UID:"cce931a7-a991-471e-8c04-d3e811164da7", APIVersion:"apps/v1", ResourceVersion:"551", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-64b97f7d4d-ckr4d
W0911 15:34:31.846] E0911 15:34:27.050400   52874 replica_set.go:450] Sync "namespace-1568216065-28491/my-depl-64b97f7d4d" failed with replicasets.apps "my-depl-64b97f7d4d" not found
W0911 15:34:31.846] I0911 15:34:27.485555   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216065-28491", Name:"nginx", UID:"f7dd79c3-a2e9-4519-8c3c-2e0bf04c1233", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0911 15:34:31.846] I0911 15:34:27.489991   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216065-28491", Name:"nginx-8484dd655", UID:"54f29c48-a1cc-4192-873e-54f76f3e37d1", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-pgs7x
W0911 15:34:31.847] I0911 15:34:27.491698   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216065-28491", Name:"nginx-8484dd655", UID:"54f29c48-a1cc-4192-873e-54f76f3e37d1", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-bqv44
W0911 15:34:31.847] I0911 15:34:27.493227   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216065-28491", Name:"nginx-8484dd655", UID:"54f29c48-a1cc-4192-873e-54f76f3e37d1", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-rrgrs
W0911 15:34:35.294] I0911 15:34:35.293961   52874 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568216058-23653
I0911 15:34:36.904] deployment.apps/nginx configured
... skipping 146 lines ...
I0911 15:34:43.902] +++ [0911 15:34:43] Creating namespace namespace-1568216083-30496
I0911 15:34:43.966] namespace/namespace-1568216083-30496 created
I0911 15:34:44.029] Context "test" modified.
I0911 15:34:44.034] +++ [0911 15:34:44] Testing kubectl get
I0911 15:34:44.107] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:44.181] Successful
I0911 15:34:44.181] message:Error from server (NotFound): pods "abc" not found
I0911 15:34:44.182] has:pods "abc" not found
I0911 15:34:44.253] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:44.326] Successful
I0911 15:34:44.326] message:Error from server (NotFound): pods "abc" not found
I0911 15:34:44.327] has:pods "abc" not found
I0911 15:34:44.401] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:44.470] Successful
I0911 15:34:44.470] message:{
I0911 15:34:44.470]     "apiVersion": "v1",
I0911 15:34:44.470]     "items": [],
... skipping 23 lines ...
I0911 15:34:44.751] has not:No resources found
I0911 15:34:44.822] Successful
I0911 15:34:44.822] message:NAME
I0911 15:34:44.822] has not:No resources found
I0911 15:34:44.894] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:44.976] Successful
I0911 15:34:44.976] message:error: the server doesn't have a resource type "foobar"
I0911 15:34:44.977] has not:No resources found
I0911 15:34:45.049] Successful
I0911 15:34:45.050] message:No resources found in namespace-1568216083-30496 namespace.
I0911 15:34:45.050] has:No resources found
I0911 15:34:45.120] Successful
I0911 15:34:45.120] message:
I0911 15:34:45.120] has not:No resources found
I0911 15:34:45.191] Successful
I0911 15:34:45.192] message:No resources found in namespace-1568216083-30496 namespace.
I0911 15:34:45.192] has:No resources found
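The three outcomes distinguished above, sketched (namespace from this run; the quiet case is plausibly "-o name", which suppresses the humanized banner):
  $ kubectl get foobar         # error: the server doesn't have a resource type "foobar"
  $ kubectl get pods           # No resources found in namespace-1568216083-30496 namespace.
  $ kubectl get pods -o name   # empty output, no "No resources found" banner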
I0911 15:34:45.267] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:45.339] Successful
I0911 15:34:45.339] message:Error from server (NotFound): pods "abc" not found
I0911 15:34:45.339] has:pods "abc" not found
I0911 15:34:45.340] FAIL!
I0911 15:34:45.341] message:Error from server (NotFound): pods "abc" not found
I0911 15:34:45.341] has not:List
I0911 15:34:45.341] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0911 15:34:45.433] Successful
I0911 15:34:45.433] message:I0911 15:34:45.396799   62836 loader.go:375] Config loaded from file:  /tmp/tmp.rT4f6UCVml/.kube/config
I0911 15:34:45.433] I0911 15:34:45.398214   62836 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0911 15:34:45.434] I0911 15:34:45.414806   62836 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0911 15:34:50.895] Successful
I0911 15:34:50.896] message:NAME    DATA   AGE
I0911 15:34:50.896] one     0      0s
I0911 15:34:50.896] three   0      0s
I0911 15:34:50.896] two     0      0s
I0911 15:34:50.896] STATUS    REASON          MESSAGE
I0911 15:34:50.896] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0911 15:34:50.896] has not:watch is only supported on individual resources
I0911 15:34:51.988] Successful
I0911 15:34:51.989] message:STATUS    REASON          MESSAGE
I0911 15:34:51.989] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0911 15:34:51.989] has not:watch is only supported on individual resources
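Both watch checks above end in the same client-side failure because the test caps the request duration; a sketch of the pattern (flag and value are an assumption here):
  $ kubectl get configmaps --watch --request-timeout=1s
  # table rows print, then the watch stream is cut off by Client.Timeout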
I0911 15:34:51.993] +++ [0911 15:34:51] Creating namespace namespace-1568216091-22261
I0911 15:34:52.055] namespace/namespace-1568216091-22261 created
I0911 15:34:52.126] Context "test" modified.
I0911 15:34:52.226] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:52.373] pod/valid-pod created
... skipping 56 lines ...
I0911 15:34:52.476] }
I0911 15:34:52.558] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0911 15:34:52.805] <no value>Successful
I0911 15:34:52.805] message:valid-pod:
I0911 15:34:52.805] has:valid-pod:
I0911 15:34:52.896] Successful
I0911 15:34:52.897] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0911 15:34:52.897] 	template was:
I0911 15:34:52.897] 		{.missing}
I0911 15:34:52.897] 	object given to jsonpath engine was:
I0911 15:34:52.898] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-11T15:34:52Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568216091-22261", "resourceVersion":"691", "selfLink":"/api/v1/namespaces/namespace-1568216091-22261/pods/valid-pod", "uid":"289610c2-203c-473e-ba0d-42d6e7a0e5b9"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0911 15:34:52.899] has:missing is not found
I0911 15:34:52.990] Successful
I0911 15:34:52.991] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0911 15:34:52.991] 	template was:
I0911 15:34:52.992] 		{{.missing}}
I0911 15:34:52.992] 	raw data was:
I0911 15:34:52.992] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-11T15:34:52Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568216091-22261","resourceVersion":"691","selfLink":"/api/v1/namespaces/namespace-1568216091-22261/pods/valid-pod","uid":"289610c2-203c-473e-ba0d-42d6e7a0e5b9"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0911 15:34:52.993] 	object given to template engine was:
I0911 15:34:52.993] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-11T15:34:52Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568216091-22261 resourceVersion:691 selfLink:/api/v1/namespaces/namespace-1568216091-22261/pods/valid-pod uid:289610c2-203c-473e-ba0d-42d6e7a0e5b9] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0911 15:34:52.994] has:map has no entry for key "missing"
W0911 15:34:53.094] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
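The two template failures above are the standard missing-key behaviors of the output formats; to reproduce (pod name from the test):
  $ kubectl get pod valid-pod -o jsonpath='{.missing}'        # missing is not found
  $ kubectl get pod valid-pod -o go-template='{{.missing}}'   # map has no entry for key "missing"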
I0911 15:34:54.064] Successful
I0911 15:34:54.065] message:NAME        READY   STATUS    RESTARTS   AGE
I0911 15:34:54.065] valid-pod   0/1     Pending   0          1s
I0911 15:34:54.065] STATUS      REASON          MESSAGE
I0911 15:34:54.065] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0911 15:34:54.065] has:STATUS
I0911 15:34:54.066] Successful
I0911 15:34:54.066] message:NAME        READY   STATUS    RESTARTS   AGE
I0911 15:34:54.067] valid-pod   0/1     Pending   0          1s
I0911 15:34:54.067] STATUS      REASON          MESSAGE
I0911 15:34:54.067] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0911 15:34:54.067] has:valid-pod
I0911 15:34:55.150] Successful
I0911 15:34:55.151] message:pod/valid-pod
I0911 15:34:55.151] has not:STATUS
I0911 15:34:55.152] Successful
I0911 15:34:55.152] message:pod/valid-pod
... skipping 72 lines ...
I0911 15:34:56.234] status:
I0911 15:34:56.234]   phase: Pending
I0911 15:34:56.234]   qosClass: Guaranteed
I0911 15:34:56.234] ---
I0911 15:34:56.235] has:name: valid-pod
I0911 15:34:56.315] Successful
I0911 15:34:56.315] message:Error from server (NotFound): pods "invalid-pod" not found
I0911 15:34:56.316] has:"invalid-pod" not found
I0911 15:34:56.385] pod "valid-pod" deleted
I0911 15:34:56.465] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:34:56.617] pod/redis-master created
I0911 15:34:56.619] pod/valid-pod created
I0911 15:34:56.701] Successful
... skipping 31 lines ...
I0911 15:34:57.717] +++ command: run_kubectl_exec_pod_tests
I0911 15:34:57.727] +++ [0911 15:34:57] Creating namespace namespace-1568216097-13648
I0911 15:34:57.790] namespace/namespace-1568216097-13648 created
I0911 15:34:57.852] Context "test" modified.
I0911 15:34:57.857] +++ [0911 15:34:57] Testing kubectl exec POD COMMAND
I0911 15:34:57.932] Successful
I0911 15:34:57.933] message:Error from server (NotFound): pods "abc" not found
I0911 15:34:57.933] has:pods "abc" not found
W0911 15:34:58.034] I0911 15:34:57.190541   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216091-22261", Name:"test-the-deployment", UID:"861d3985-3cc1-4ba7-a691-3e2f9544be01", APIVersion:"apps/v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-69fdbb5f7d to 3
W0911 15:34:58.035] I0911 15:34:57.194234   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216091-22261", Name:"test-the-deployment-69fdbb5f7d", UID:"6a56f528-1ef2-4c22-a4f7-9b972b300a74", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-gcls9
W0911 15:34:58.036] I0911 15:34:57.197010   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216091-22261", Name:"test-the-deployment-69fdbb5f7d", UID:"6a56f528-1ef2-4c22-a4f7-9b972b300a74", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-qbrsq
W0911 15:34:58.036] I0911 15:34:57.197301   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216091-22261", Name:"test-the-deployment-69fdbb5f7d", UID:"6a56f528-1ef2-4c22-a4f7-9b972b300a74", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-r6rnj
I0911 15:34:58.137] pod/test-pod created
I0911 15:34:58.161] Successful
I0911 15:34:58.161] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0911 15:34:58.161] has not:pods "test-pod" not found
I0911 15:34:58.162] Successful
I0911 15:34:58.163] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0911 15:34:58.163] has not:pod or type/name must be specified
I0911 15:34:58.234] pod "test-pod" deleted
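The BadRequest above, rather than NotFound, is the distinction this case checks: the pod exists but is unscheduled, so there is no kubelet to exec through. Sketch:
  $ kubectl exec abc -- date        # Error from server (NotFound): pods "abc" not found
  $ kubectl exec test-pod -- date   # Error from server (BadRequest): pod test-pod does not have a host assigned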
I0911 15:34:58.251] +++ exit code: 0
I0911 15:34:58.281] Recording: run_kubectl_exec_resource_name_tests
I0911 15:34:58.281] Running command: run_kubectl_exec_resource_name_tests
I0911 15:34:58.300] 
... skipping 2 lines ...
I0911 15:34:58.306] +++ command: run_kubectl_exec_resource_name_tests
I0911 15:34:58.315] +++ [0911 15:34:58] Creating namespace namespace-1568216098-18874
I0911 15:34:58.382] namespace/namespace-1568216098-18874 created
I0911 15:34:58.443] Context "test" modified.
I0911 15:34:58.448] +++ [0911 15:34:58] Testing kubectl exec TYPE/NAME COMMAND
I0911 15:34:58.534] Successful
I0911 15:34:58.535] message:error: the server doesn't have a resource type "foo"
I0911 15:34:58.535] has:error:
I0911 15:34:58.605] Successful
I0911 15:34:58.606] message:Error from server (NotFound): deployments.apps "bar" not found
I0911 15:34:58.606] has:"bar" not found
I0911 15:34:58.735] pod/test-pod created
I0911 15:34:58.868] replicaset.apps/frontend created
W0911 15:34:58.969] I0911 15:34:58.871481   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216098-18874", Name:"frontend", UID:"eb6e8fe1-e8e0-4de8-81a4-fce57c406b14", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6w7v6
W0911 15:34:58.969] I0911 15:34:58.873952   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216098-18874", Name:"frontend", UID:"eb6e8fe1-e8e0-4de8-81a4-fce57c406b14", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-srm6v
W0911 15:34:58.970] I0911 15:34:58.874158   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216098-18874", Name:"frontend", UID:"eb6e8fe1-e8e0-4de8-81a4-fce57c406b14", APIVersion:"apps/v1", ResourceVersion:"743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qdb6s
I0911 15:34:59.070] configmap/test-set-env-config created
I0911 15:34:59.104] Successful
I0911 15:34:59.104] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0911 15:34:59.104] has:not implemented
I0911 15:34:59.190] Successful
I0911 15:34:59.190] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0911 15:34:59.190] has not:not found
I0911 15:34:59.192] Successful
I0911 15:34:59.192] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0911 15:34:59.192] has not:pod or type/name must be specified
I0911 15:34:59.292] Successful
I0911 15:34:59.292] message:Error from server (BadRequest): pod frontend-6w7v6 does not have a host assigned
I0911 15:34:59.293] has not:not found
I0911 15:34:59.294] Successful
I0911 15:34:59.294] message:Error from server (BadRequest): pod frontend-6w7v6 does not have a host assigned
I0911 15:34:59.294] has not:pod or type/name must be specified
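exec with TYPE/NAME resolves the name to a pod before connecting; sketched from the names above:
  $ kubectl exec foo/x -- date                 # error: the server doesn't have a resource type "foo"
  $ kubectl exec deployment/bar -- date        # Error from server (NotFound): deployments.apps "bar" not found
  $ kubectl exec replicaset/frontend -- date   # resolves to a frontend pod, then BadRequest while it is unscheduled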
I0911 15:34:59.367] pod "test-pod" deleted
I0911 15:34:59.459] replicaset.apps "frontend" deleted
I0911 15:34:59.552] configmap "test-set-env-config" deleted
I0911 15:34:59.570] +++ exit code: 0
I0911 15:34:59.599] Recording: run_create_secret_tests
I0911 15:34:59.599] Running command: run_create_secret_tests
I0911 15:34:59.616] 
I0911 15:34:59.619] +++ Running case: test-cmd.run_create_secret_tests 
I0911 15:34:59.621] +++ working dir: /go/src/k8s.io/kubernetes
I0911 15:34:59.623] +++ command: run_create_secret_tests
I0911 15:34:59.711] Successful
I0911 15:34:59.711] message:Error from server (NotFound): secrets "mysecret" not found
I0911 15:34:59.711] has:secrets "mysecret" not found
I0911 15:34:59.853] Successful
I0911 15:34:59.854] message:Error from server (NotFound): secrets "mysecret" not found
I0911 15:34:59.854] has:secrets "mysecret" not found
I0911 15:34:59.855] Successful
I0911 15:34:59.855] message:user-specified
I0911 15:34:59.855] has:user-specified
I0911 15:34:59.933] Successful
I0911 15:35:00.002] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1ca45e6a-53b9-43ec-90ca-75a00a51e597","resourceVersion":"765","creationTimestamp":"2019-09-11T15:34:59Z"}}
... skipping 2 lines ...
I0911 15:35:00.150] has:uid
I0911 15:35:00.222] Successful
I0911 15:35:00.222] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"1ca45e6a-53b9-43ec-90ca-75a00a51e597","resourceVersion":"766","creationTimestamp":"2019-09-11T15:34:59Z"},"data":{"key1":"config1"}}
I0911 15:35:00.222] has:config1
I0911 15:35:00.283] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"1ca45e6a-53b9-43ec-90ca-75a00a51e597"}}
I0911 15:35:00.363] Successful
I0911 15:35:00.363] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0911 15:35:00.363] has:configmaps "tester-update-cm" not found
I0911 15:35:00.375] +++ exit code: 0
I0911 15:35:00.405] Recording: run_kubectl_create_kustomization_directory_tests
I0911 15:35:00.405] Running command: run_kubectl_create_kustomization_directory_tests
I0911 15:35:00.424] 
I0911 15:35:00.426] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
W0911 15:35:02.751] I0911 15:35:00.812527   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216098-18874", Name:"test-the-deployment-69fdbb5f7d", UID:"6209050b-c02d-44cb-b228-d19d2818534e", APIVersion:"apps/v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-lfcvn
W0911 15:35:02.752] I0911 15:35:00.818351   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216098-18874", Name:"test-the-deployment-69fdbb5f7d", UID:"6209050b-c02d-44cb-b228-d19d2818534e", APIVersion:"apps/v1", ResourceVersion:"774", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-drl9p
I0911 15:35:03.724] Successful
I0911 15:35:03.724] message:NAME        READY   STATUS    RESTARTS   AGE
I0911 15:35:03.724] valid-pod   0/1     Pending   0          0s
I0911 15:35:03.725] STATUS      REASON          MESSAGE
I0911 15:35:03.725] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0911 15:35:03.725] has:Timeout exceeded while reading body
I0911 15:35:03.797] Successful
I0911 15:35:03.797] message:NAME        READY   STATUS    RESTARTS   AGE
I0911 15:35:03.798] valid-pod   0/1     Pending   0          1s
I0911 15:35:03.798] has:valid-pod
I0911 15:35:03.860] Successful
I0911 15:35:03.861] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0911 15:35:03.861] has:Invalid timeout value
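The timeout validation above accepts a bare integer (seconds) or an integer plus unit; the flag being exercised is likely --request-timeout (an assumption, per the surrounding get tests):
  $ kubectl get pod valid-pod --request-timeout=1s    # accepted
  $ kubectl get pod valid-pod --request-timeout=abc   # error: Invalid timeout value ...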
I0911 15:35:03.932] pod "valid-pod" deleted
I0911 15:35:03.948] +++ exit code: 0
I0911 15:35:04.202] Recording: run_crd_tests
I0911 15:35:04.202] Running command: run_crd_tests
I0911 15:35:04.222] 
... skipping 166 lines ...
W0911 15:35:09.734] I0911 15:35:09.686603   52874 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for validfoos.company.com
W0911 15:35:09.735] I0911 15:35:09.686870   52874 shared_informer.go:197] Waiting for caches to sync for resource quota
W0911 15:35:09.735] I0911 15:35:09.720768   49323 client.go:361] parsed scheme: "endpoint"
W0911 15:35:09.736] I0911 15:35:09.720829   49323 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0911 15:35:09.788] I0911 15:35:09.788088   52874 shared_informer.go:204] Caches are synced for resource quota 
I0911 15:35:09.889] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0911 15:35:09.949] +++ [0911 15:35:09] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0911 15:35:10.041] {
I0911 15:35:10.041]     "apiVersion": "company.com/v1",
I0911 15:35:10.041]     "kind": "Foo",
I0911 15:35:10.041]     "metadata": {
I0911 15:35:10.041]         "annotations": {
I0911 15:35:10.042]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 183 lines ...
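The change-cause annotation above records the server-side invocation verbatim, and the assertion before it shows why the --local variant needs --type merge: kubectl has no strategic-merge schema for a CustomResource. Sketch:
  $ kubectl patch foos/test --type=merge --record=true -p '{"patched":null}'   # works against the server
  $ kubectl patch --local -f foo.yaml -p '{"patched":"x"}'
  # error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge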
I0911 15:35:18.828] bar.company.com/test created
I0911 15:35:18.946] crd.sh:455: Successful get bars {{len .items}}: 1
I0911 15:35:19.054] namespace "non-native-resources" deleted
I0911 15:35:24.235] crd.sh:458: Successful get bars {{len .items}}: 0
I0911 15:35:24.401] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0911 15:35:24.496] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W0911 15:35:24.597] Error from server (NotFound): namespaces "non-native-resources" not found
I0911 15:35:24.699] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0911 15:35:24.700] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0911 15:35:24.723] +++ exit code: 0
I0911 15:35:24.840] Recording: run_cmd_with_img_tests
I0911 15:35:24.841] Running command: run_cmd_with_img_tests
I0911 15:35:24.863] 
... skipping 6 lines ...
I0911 15:35:25.022] +++ [0911 15:35:25] Testing cmd with image
I0911 15:35:25.119] Successful
I0911 15:35:25.120] message:deployment.apps/test1 created
I0911 15:35:25.120] has:deployment.apps/test1 created
I0911 15:35:25.193] deployment.apps "test1" deleted
I0911 15:35:25.265] Successful
I0911 15:35:25.266] message:error: Invalid image name "InvalidImageName": invalid reference format
I0911 15:35:25.266] has:error: Invalid image name "InvalidImageName": invalid reference format
I0911 15:35:25.277] +++ exit code: 0
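The two run cases above differ only in whether the image string parses as an image reference (repository names must be lowercase):
  $ kubectl run test1 --image=k8s.gcr.io/nginx:test-cmd   # deployment.apps/test1 created (via the deprecated generator)
  $ kubectl run test1 --image=InvalidImageName            # error: Invalid image name ... invalid reference format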
W0911 15:35:25.378] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:35:25.379] I0911 15:35:25.107293   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216124-10002", Name:"test1", UID:"817af98e-8e4d-4259-8e04-576426067e8d", APIVersion:"apps/v1", ResourceVersion:"912", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-6cdffdb5b8 to 1
W0911 15:35:25.379] I0911 15:35:25.118272   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216124-10002", Name:"test1-6cdffdb5b8", UID:"5ab301a3-c1d3-41aa-8577-e87604059c90", APIVersion:"apps/v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-585vk
W0911 15:35:25.409] W0911 15:35:25.408887   49323 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0911 15:35:25.411] E0911 15:35:25.410472   52874 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:25.504] W0911 15:35:25.503605   49323 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0911 15:35:25.505] E0911 15:35:25.504843   52874 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:25.606] +++ [0911 15:35:25] Testing recursive resources
I0911 15:35:25.606] +++ [0911 15:35:25] Creating namespace namespace-1568216125-11045
I0911 15:35:25.606] namespace/namespace-1568216125-11045 created
I0911 15:35:25.660] Context "test" modified.
I0911 15:35:25.745] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:26.001] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:26.004] Successful
I0911 15:35:26.004] message:pod/busybox0 created
I0911 15:35:26.004] pod/busybox1 created
I0911 15:35:26.004] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0911 15:35:26.005] has:error validating data: kind not set
I0911 15:35:26.083] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:26.240] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0911 15:35:26.242] Successful
I0911 15:35:26.242] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:26.243] has:Object 'Kind' is missing
I0911 15:35:26.321] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:26.586] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0911 15:35:26.588] Successful
I0911 15:35:26.588] message:pod/busybox0 replaced
I0911 15:35:26.588] pod/busybox1 replaced
I0911 15:35:26.588] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0911 15:35:26.588] has:error validating data: kind not set
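These cases drive kubectl's --recursive (-R) flag over a directory in which one manifest, busybox-broken.yaml, deliberately misspells kind as ind; the valid files are processed and the failure is still reported. Sketch (path from the test tree):
  $ kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 created, pod/busybox1 created, then the validation error for the broken file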
I0911 15:35:26.664] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:26.748] Successful
I0911 15:35:26.748] message:Name:         busybox0
I0911 15:35:26.749] Namespace:    namespace-1568216125-11045
I0911 15:35:26.749] Priority:     0
I0911 15:35:26.749] Node:         <none>
... skipping 159 lines ...
I0911 15:35:26.775] has:Object 'Kind' is missing
I0911 15:35:26.828] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:26.982] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0911 15:35:26.984] Successful
I0911 15:35:26.985] message:pod/busybox0 annotated
I0911 15:35:26.985] pod/busybox1 annotated
I0911 15:35:26.986] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:26.986] has:Object 'Kind' is missing
I0911 15:35:27.066] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:27.304] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0911 15:35:27.306] Successful
I0911 15:35:27.306] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0911 15:35:27.307] pod/busybox0 configured
I0911 15:35:27.307] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0911 15:35:27.307] pod/busybox1 configured
I0911 15:35:27.308] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0911 15:35:27.308] has:error validating data: kind not set
I0911 15:35:27.387] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:27.544] deployment.apps/nginx created
I0911 15:35:27.638] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0911 15:35:27.716] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:35:27.868] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0911 15:35:27.870] Successful
... skipping 42 lines ...
I0911 15:35:27.942] deployment.apps "nginx" deleted
I0911 15:35:28.025] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:28.176] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:28.177] Successful
I0911 15:35:28.177] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0911 15:35:28.178] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0911 15:35:28.178] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.178] has:Object 'Kind' is missing
I0911 15:35:28.258] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:28.336] Successful
I0911 15:35:28.336] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.337] has:busybox0:busybox1:
I0911 15:35:28.339] Successful
I0911 15:35:28.339] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.339] has:Object 'Kind' is missing
I0911 15:35:28.416] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:28.494] pod/busybox0 labeled
I0911 15:35:28.494] pod/busybox1 labeled
I0911 15:35:28.494] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.577] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0911 15:35:28.579] Successful
I0911 15:35:28.579] message:pod/busybox0 labeled
I0911 15:35:28.580] pod/busybox1 labeled
I0911 15:35:28.580] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.580] has:Object 'Kind' is missing
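label, like the earlier annotate case, follows the same recursive pattern: the valid manifests are updated and the broken one is reported (keys and values from the assertions above):
  $ kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue
  $ kubectl annotate -f hack/testdata/recursive/pod --recursive annotatekey=annotatevalue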
I0911 15:35:28.658] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:28.733] pod/busybox0 patched
I0911 15:35:28.734] pod/busybox1 patched
I0911 15:35:28.734] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.815] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0911 15:35:28.817] Successful
I0911 15:35:28.817] message:pod/busybox0 patched
I0911 15:35:28.818] pod/busybox1 patched
I0911 15:35:28.818] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:28.818] has:Object 'Kind' is missing
I0911 15:35:28.897] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:29.069] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:29.072] Successful
I0911 15:35:29.072] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0911 15:35:29.072] pod "busybox0" force deleted
I0911 15:35:29.072] pod "busybox1" force deleted
I0911 15:35:29.073] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0911 15:35:29.073] has:Object 'Kind' is missing
I0911 15:35:29.169] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:29.322] replicationcontroller/busybox0 created
I0911 15:35:29.325] replicationcontroller/busybox1 created
I0911 15:35:29.429] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:29.514] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:29.600] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0911 15:35:29.682] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0911 15:35:29.840] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0911 15:35:29.931] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0911 15:35:29.932] Successful
I0911 15:35:29.933] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0911 15:35:29.933] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0911 15:35:29.934] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:29.934] has:Object 'Kind' is missing
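The hpa assertions above imply an autoscale pass over the rc directory; a sketch with flag values inferred from the asserted "1 2 80":
  $ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80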
I0911 15:35:30.003] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0911 15:35:30.078] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0911 15:35:30.164] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:30.249] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0911 15:35:30.328] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0911 15:35:30.497] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0911 15:35:30.579] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0911 15:35:30.581] Successful
I0911 15:35:30.581] message:service/busybox0 exposed
I0911 15:35:30.582] service/busybox1 exposed
I0911 15:35:30.582] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:30.582] has:Object 'Kind' is missing
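expose works over the same directory, one service per rc; the <no value> in the assertions is an unnamed port. Sketch (port taken from the asserted 80):
  $ kubectl expose -f hack/testdata/recursive/rc --recursive --port=80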
I0911 15:35:30.665] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:30.746] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0911 15:35:30.827] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0911 15:35:31.003] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0911 15:35:31.086] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0911 15:35:31.088] Successful
I0911 15:35:31.088] message:replicationcontroller/busybox0 scaled
I0911 15:35:31.088] replicationcontroller/busybox1 scaled
I0911 15:35:31.089] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:31.089] has:Object 'Kind' is missing
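And the scale pass that produces the replica change from 1 to 2 asserted above (sketch; flags inferred):
  $ kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2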
I0911 15:35:31.170] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:31.329] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:31.331] Successful
I0911 15:35:31.331] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0911 15:35:31.331] replicationcontroller "busybox0" force deleted
I0911 15:35:31.332] replicationcontroller "busybox1" force deleted
I0911 15:35:31.332] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:31.332] has:Object 'Kind' is missing
I0911 15:35:31.412] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:31.560] deployment.apps/nginx1-deployment created
I0911 15:35:31.564] deployment.apps/nginx0-deployment created
W0911 15:35:31.665] W0911 15:35:25.607126   49323 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0911 15:35:31.666] E0911 15:35:25.608321   52874 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.666] W0911 15:35:25.705386   49323 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0911 15:35:31.667] E0911 15:35:25.706893   52874 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.667] E0911 15:35:26.411962   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.667] E0911 15:35:26.505737   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.668] E0911 15:35:26.609859   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.668] E0911 15:35:26.708197   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.669] E0911 15:35:27.413695   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.669] E0911 15:35:27.507355   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.670] I0911 15:35:27.547978   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216125-11045", Name:"nginx", UID:"7fdf0e37-d4c5-487a-8451-149357334786", APIVersion:"apps/v1", ResourceVersion:"937", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0911 15:35:31.671] I0911 15:35:27.550932   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx-f87d999f7", UID:"d9c302bf-5b31-4a8e-b006-acd1c8258c4b", APIVersion:"apps/v1", ResourceVersion:"938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-g98nx
W0911 15:35:31.671] I0911 15:35:27.553270   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx-f87d999f7", UID:"d9c302bf-5b31-4a8e-b006-acd1c8258c4b", APIVersion:"apps/v1", ResourceVersion:"938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-wq5hj
W0911 15:35:31.672] I0911 15:35:27.554245   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx-f87d999f7", UID:"d9c302bf-5b31-4a8e-b006-acd1c8258c4b", APIVersion:"apps/v1", ResourceVersion:"938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-d6wvh
W0911 15:35:31.673] E0911 15:35:27.611095   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.673] E0911 15:35:27.709316   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.673] kubectl convert is DEPRECATED and will be removed in a future version.
W0911 15:35:31.673] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0911 15:35:31.674] E0911 15:35:28.414908   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.674] E0911 15:35:28.508366   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.674] E0911 15:35:28.612410   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.675] E0911 15:35:28.710557   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.675] I0911 15:35:29.141367   52874 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0911 15:35:31.675] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0911 15:35:31.676] I0911 15:35:29.326180   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox0", UID:"55c7065b-4eb7-4a58-ab87-afd669ede3cc", APIVersion:"v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qswvv
W0911 15:35:31.676] I0911 15:35:29.329378   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox1", UID:"37e7445d-bd83-481f-a9f4-2fd37d9138d6", APIVersion:"v1", ResourceVersion:"971", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-q69n8
W0911 15:35:31.676] E0911 15:35:29.416197   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.677] E0911 15:35:29.509430   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.677] E0911 15:35:29.613732   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.677] E0911 15:35:29.711741   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.678] E0911 15:35:30.417450   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.678] E0911 15:35:30.510537   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.678] E0911 15:35:30.615393   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.679] E0911 15:35:30.712923   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.679] I0911 15:35:30.914377   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox0", UID:"55c7065b-4eb7-4a58-ab87-afd669ede3cc", APIVersion:"v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-n4tq6
W0911 15:35:31.679] I0911 15:35:30.921459   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox1", UID:"37e7445d-bd83-481f-a9f4-2fd37d9138d6", APIVersion:"v1", ResourceVersion:"993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-ct7xg
W0911 15:35:31.680] E0911 15:35:31.418556   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.680] E0911 15:35:31.511779   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.680] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0911 15:35:31.681] I0911 15:35:31.564270   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216125-11045", Name:"nginx1-deployment", UID:"c301e81e-b7f6-45d9-9634-03fdf5a1e452", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0911 15:35:31.681] I0911 15:35:31.567144   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx1-deployment-7bdbbfb5cf", UID:"a95d7cb6-edd4-4fe7-ac79-95363157bf92", APIVersion:"apps/v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-bpb4l
W0911 15:35:31.682] I0911 15:35:31.569540   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx1-deployment-7bdbbfb5cf", UID:"a95d7cb6-edd4-4fe7-ac79-95363157bf92", APIVersion:"apps/v1", ResourceVersion:"1011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-n8p8s
W0911 15:35:31.682] I0911 15:35:31.569680   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216125-11045", Name:"nginx0-deployment", UID:"89c2edc9-9682-45a5-9efb-04f0419f8f19", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0911 15:35:31.682] I0911 15:35:31.574605   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx0-deployment-57c6bff7f6", UID:"8ee206b3-bc49-4cc4-9fe4-2a991dfab55c", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-x2pg4
W0911 15:35:31.683] I0911 15:35:31.578067   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216125-11045", Name:"nginx0-deployment-57c6bff7f6", UID:"8ee206b3-bc49-4cc4-9fe4-2a991dfab55c", APIVersion:"apps/v1", ResourceVersion:"1016", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-z2tgg
W0911 15:35:31.683] E0911 15:35:31.616973   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:31.714] E0911 15:35:31.714243   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:31.815] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0911 15:35:31.816] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0911 15:35:31.927] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0911 15:35:31.930] Successful
I0911 15:35:31.930] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0911 15:35:31.931] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0911 15:35:31.931] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0911 15:35:31.932] has:Object 'Kind' is missing
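Note: the decode failure stems from the fixture's "ind" key where "kind" is required; restoring the field makes the object decodable. A corrected sketch of the manifest quoted in the error above:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment          # the broken fixture spells this key "ind"
  metadata:
    name: nginx2-deployment
    labels:
      app: nginx2-deployment
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: nginx2
    template:
      metadata:
        labels:
          app: nginx2
      spec:
        containers:
        - name: nginx
          image: k8s.gcr.io/nginx:1.7.9
          ports:
          - containerPort: 80
  EOF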
I0911 15:35:32.011] deployment.apps/nginx1-deployment paused
I0911 15:35:32.015] deployment.apps/nginx0-deployment paused
I0911 15:35:32.105] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0911 15:35:32.107] Successful
I0911 15:35:32.107] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0911 15:35:32.379] 1         <none>
I0911 15:35:32.379] 
I0911 15:35:32.379] deployment.apps/nginx0-deployment 
I0911 15:35:32.379] REVISION  CHANGE-CAUSE
I0911 15:35:32.379] 1         <none>
I0911 15:35:32.379] 
I0911 15:35:32.380] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0911 15:35:32.380] has:nginx0-deployment
I0911 15:35:32.382] Successful
I0911 15:35:32.383] message:deployment.apps/nginx1-deployment 
I0911 15:35:32.383] REVISION  CHANGE-CAUSE
I0911 15:35:32.383] 1         <none>
I0911 15:35:32.383] 
I0911 15:35:32.384] deployment.apps/nginx0-deployment 
I0911 15:35:32.384] REVISION  CHANGE-CAUSE
I0911 15:35:32.384] 1         <none>
I0911 15:35:32.384] 
I0911 15:35:32.385] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0911 15:35:32.385] has:nginx1-deployment
I0911 15:35:32.385] Successful
I0911 15:35:32.385] message:deployment.apps/nginx1-deployment 
I0911 15:35:32.386] REVISION  CHANGE-CAUSE
I0911 15:35:32.386] 1         <none>
I0911 15:35:32.386] 
I0911 15:35:32.386] deployment.apps/nginx0-deployment 
I0911 15:35:32.386] REVISION  CHANGE-CAUSE
I0911 15:35:32.387] 1         <none>
I0911 15:35:32.387] 
I0911 15:35:32.387] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0911 15:35:32.388] has:Object 'Kind' is missing
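Note: the REVISION/CHANGE-CAUSE tables above are standard kubectl rollout history output; CHANGE-CAUSE reads <none> because the kubernetes.io/change-cause annotation was never set on these revisions. Illustrative invocation:

  kubectl rollout history deployment/nginx1-deployment
  # REVISION  CHANGE-CAUSE
  # 1         <none>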
I0911 15:35:32.454] deployment.apps "nginx1-deployment" force deleted
I0911 15:35:32.459] deployment.apps "nginx0-deployment" force deleted
W0911 15:35:32.560] E0911 15:35:32.419983   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:32.561] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0911 15:35:32.561] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0911 15:35:32.562] E0911 15:35:32.513393   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:32.618] E0911 15:35:32.618414   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:32.716] E0911 15:35:32.715506   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:33.421] E0911 15:35:33.421275   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:33.515] E0911 15:35:33.514590   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:33.615] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:33.672] replicationcontroller/busybox0 created
I0911 15:35:33.675] replicationcontroller/busybox1 created
I0911 15:35:33.765] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0911 15:35:33.845] Successful
I0911 15:35:33.845] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0911 15:35:33.847] message:no rollbacker has been implemented for "ReplicationController"
I0911 15:35:33.847] no rollbacker has been implemented for "ReplicationController"
I0911 15:35:33.847] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:33.848] has:Object 'Kind' is missing
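Note: rollback is only implemented for workload kinds that keep revision history (Deployments, DaemonSets, StatefulSets), hence the "no rollbacker" message for these ReplicationControllers. Illustrative commands:

  kubectl rollout undo rc/busybox0          # no rollbacker has been implemented for "ReplicationController"
  kubectl rollout undo deployment/nginx     # supported: Deployments keep ReplicaSet revisions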
I0911 15:35:33.933] Successful
I0911 15:35:33.933] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:33.933] error: replicationcontrollers "busybox0" pausing is not supported
I0911 15:35:33.934] error: replicationcontrollers "busybox1" pausing is not supported
I0911 15:35:33.934] has:Object 'Kind' is missing
I0911 15:35:33.935] Successful
I0911 15:35:33.936] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:33.936] error: replicationcontrollers "busybox0" pausing is not supported
I0911 15:35:33.936] error: replicationcontrollers "busybox1" pausing is not supported
I0911 15:35:33.937] has:replicationcontrollers "busybox0" pausing is not supported
I0911 15:35:33.937] Successful
I0911 15:35:33.938] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:33.938] error: replicationcontrollers "busybox0" pausing is not supported
I0911 15:35:33.938] error: replicationcontrollers "busybox1" pausing is not supported
I0911 15:35:33.938] has:replicationcontrollers "busybox1" pausing is not supported
I0911 15:35:34.018] Successful
I0911 15:35:34.019] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:34.019] error: replicationcontrollers "busybox0" resuming is not supported
I0911 15:35:34.019] error: replicationcontrollers "busybox1" resuming is not supported
I0911 15:35:34.019] has:Object 'Kind' is missing
I0911 15:35:34.020] Successful
I0911 15:35:34.020] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:34.020] error: replicationcontrollers "busybox0" resuming is not supported
I0911 15:35:34.021] error: replicationcontrollers "busybox1" resuming is not supported
I0911 15:35:34.021] has:replicationcontrollers "busybox0" resuming is not supported
I0911 15:35:34.022] Successful
I0911 15:35:34.023] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0911 15:35:34.023] error: replicationcontrollers "busybox0" resuming is not supported
I0911 15:35:34.024] error: replicationcontrollers "busybox1" resuming is not supported
I0911 15:35:34.024] has:replicationcontrollers "busybox0" resuming is not supported
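Note: pause and resume are likewise Deployment-only rollout operations, which is what the "pausing/resuming is not supported" assertions verify. Illustrative commands:

  kubectl rollout pause rc/busybox0                     # error: pausing is not supported
  kubectl rollout pause deployment/nginx1-deployment    # supported, as seen earlier in this run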
I0911 15:35:34.093] replicationcontroller "busybox0" force deleted
I0911 15:35:34.097] replicationcontroller "busybox1" force deleted
W0911 15:35:34.198] E0911 15:35:33.619753   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:34.198] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0911 15:35:34.199] I0911 15:35:33.676210   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox0", UID:"263c57de-d355-45bb-aa9a-694c614fae0c", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-crlpc
W0911 15:35:34.199] I0911 15:35:33.679103   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216125-11045", Name:"busybox1", UID:"9c91961c-566d-4e34-96b4-82b60f9443cc", APIVersion:"v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kpqqq
W0911 15:35:34.199] E0911 15:35:33.716681   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:34.200] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0911 15:35:34.200] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0911 15:35:34.423] E0911 15:35:34.422581   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:34.516] E0911 15:35:34.515737   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:34.621] E0911 15:35:34.621099   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:34.718] E0911 15:35:34.717965   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:35.102] Recording: run_namespace_tests
I0911 15:35:35.102] Running command: run_namespace_tests
I0911 15:35:35.125] 
I0911 15:35:35.127] +++ Running case: test-cmd.run_namespace_tests 
I0911 15:35:35.130] +++ working dir: /go/src/k8s.io/kubernetes
I0911 15:35:35.132] +++ command: run_namespace_tests
I0911 15:35:35.158] +++ [0911 15:35:35] Testing kubectl(v1:namespaces)
I0911 15:35:35.217] namespace/my-namespace created
I0911 15:35:35.301] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0911 15:35:35.366] namespace "my-namespace" deleted
W0911 15:35:35.467] E0911 15:35:35.423820   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W0911 15:35:39.725] E0911 15:35:39.724530   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:39.990] I0911 15:35:39.989903   52874 shared_informer.go:197] Waiting for caches to sync for resource quota
W0911 15:35:40.090] I0911 15:35:40.090425   52874 shared_informer.go:204] Caches are synced for resource quota 
W0911 15:35:40.430] E0911 15:35:40.429864   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:40.497] I0911 15:35:40.497436   52874 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0911 15:35:40.525] E0911 15:35:40.525188   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:40.598] I0911 15:35:40.597961   52874 shared_informer.go:204] Caches are synced for garbage collector 
W0911 15:35:40.631] E0911 15:35:40.631046   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:40.726] E0911 15:35:40.725984   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:40.827] namespace/my-namespace condition met
I0911 15:35:40.827] Successful
I0911 15:35:40.828] message:Error from server (NotFound): namespaces "my-namespace" not found
I0911 15:35:40.828] has: not found
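Note: "condition met" is the output shape of kubectl wait; a plausible reconstruction of the wait-for-deletion pattern exercised here (timeout value is illustrative):

  kubectl delete namespace my-namespace --wait=false
  kubectl wait --for=delete namespace/my-namespace --timeout=60s
  # namespace/my-namespace condition met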
I0911 15:35:40.828] namespace/my-namespace created
I0911 15:35:40.828] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0911 15:35:40.864] Successful
I0911 15:35:40.864] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0911 15:35:40.864] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0911 15:35:40.869] namespace "namespace-1568216101-17382" deleted
I0911 15:35:40.869] namespace "namespace-1568216102-23450" deleted
I0911 15:35:40.869] namespace "namespace-1568216104-5152" deleted
I0911 15:35:40.869] namespace "namespace-1568216105-19019" deleted
I0911 15:35:40.869] namespace "namespace-1568216124-10002" deleted
I0911 15:35:40.869] namespace "namespace-1568216125-11045" deleted
I0911 15:35:40.869] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0911 15:35:40.870] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0911 15:35:40.870] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0911 15:35:40.870] has:warning: deleting cluster-scoped resources
I0911 15:35:40.870] Successful
I0911 15:35:40.870] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0911 15:35:40.870] namespace "kube-node-lease" deleted
I0911 15:35:40.870] namespace "my-namespace" deleted
I0911 15:35:40.870] namespace "namespace-1568216018-1729" deleted
... skipping 27 lines ...
I0911 15:35:40.875] namespace "namespace-1568216101-17382" deleted
I0911 15:35:40.875] namespace "namespace-1568216102-23450" deleted
I0911 15:35:40.875] namespace "namespace-1568216104-5152" deleted
I0911 15:35:40.875] namespace "namespace-1568216105-19019" deleted
I0911 15:35:40.875] namespace "namespace-1568216124-10002" deleted
I0911 15:35:40.875] namespace "namespace-1568216125-11045" deleted
I0911 15:35:40.875] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0911 15:35:40.876] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0911 15:35:40.876] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0911 15:35:40.876] has:namespace "my-namespace" deleted
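Note: the three Forbidden errors are expected; the NamespaceLifecycle admission plugin refuses to delete the default, kube-public, and kube-system namespaces, so a sweep over all namespaces always reports them:

  kubectl delete namespace default
  # Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted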
I0911 15:35:40.957] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0911 15:35:41.021] namespace/other created
I0911 15:35:41.110] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0911 15:35:41.202] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:41.348] pod/valid-pod created
I0911 15:35:41.448] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0911 15:35:41.530] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0911 15:35:41.611] Successful
I0911 15:35:41.611] message:error: a resource cannot be retrieved by name across all namespaces
I0911 15:35:41.611] has:a resource cannot be retrieved by name across all namespaces
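Note: a namespaced object's name is only unique within its namespace, so kubectl rejects a name combined with --all-namespaces. The valid forms, using this test's pod:

  kubectl get pod valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
  kubectl get pod valid-pod --namespace=other   # OK: name scoped to a single namespace
  kubectl get pods --all-namespaces             # OK: listing requires no name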
I0911 15:35:41.702] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0911 15:35:41.780] pod "valid-pod" force deleted
I0911 15:35:41.868] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:41.949] namespace "other" deleted
W0911 15:35:42.050] E0911 15:35:41.431166   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:42.050] E0911 15:35:41.530606   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:42.051] E0911 15:35:41.632237   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:42.051] E0911 15:35:41.727419   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:42.051] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0911 15:35:42.433] E0911 15:35:42.432591   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 10 lines ...
W0911 15:35:44.732] E0911 15:35:44.731737   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:44.753] I0911 15:35:44.753169   52874 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568216125-11045
W0911 15:35:44.759] I0911 15:35:44.758736   52874 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568216125-11045
W0911 15:35:45.436] E0911 15:35:45.436270   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:45.537] E0911 15:35:45.536859   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:45.638] E0911 15:35:45.637794   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:45.733] E0911 15:35:45.733036   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:46.438] E0911 15:35:46.437906   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:46.538] E0911 15:35:46.537699   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:46.639] E0911 15:35:46.639165   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:46.734] E0911 15:35:46.734338   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:47.040] +++ exit code: 0
I0911 15:35:47.069] Recording: run_secrets_test
I0911 15:35:47.070] Running command: run_secrets_test
I0911 15:35:47.089] 
I0911 15:35:47.091] +++ Running case: test-cmd.run_secrets_test 
I0911 15:35:47.094] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 58 lines ...
I0911 15:35:48.759] (Bsecret "test-secret" deleted
I0911 15:35:48.826] secret/test-secret created
I0911 15:35:48.904] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0911 15:35:48.988] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0911 15:35:49.062] secret "test-secret" deleted
W0911 15:35:49.162] I0911 15:35:47.298190   68974 loader.go:375] Config loaded from file:  /tmp/tmp.rT4f6UCVml/.kube/config
W0911 15:35:49.163] E0911 15:35:47.439208   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.163] E0911 15:35:47.538989   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.163] E0911 15:35:47.640643   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.164] E0911 15:35:47.735604   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.164] E0911 15:35:48.440363   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.164] E0911 15:35:48.539974   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.164] E0911 15:35:48.641953   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.164] E0911 15:35:48.737336   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:49.265] secret/secret-string-data created
I0911 15:35:49.302] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0911 15:35:49.380] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0911 15:35:49.459] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0911 15:35:49.530] secret "secret-string-data" deleted
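Note: these assertions capture how Secret stringData behaves: it is a write-only convenience field whose values are base64-encoded and merged into .data on write (djE= and djI= decode to v1 and v2), and it is not persisted, so .stringData reads back as <no value>. A sketch of the kind of manifest behind the checks (values taken from the assertions above):

  kubectl create -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
    namespace: test-secrets
  stringData:               # encoded into .data; never stored as-is
    k1: v1
    k2: v2
  EOF
  kubectl get secret/secret-string-data --namespace=test-secrets -o go-template='{{.data}}'
  # map[k1:djE= k2:djI=]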
I0911 15:35:49.613] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:35:49.753] secret "test-secret" deleted
I0911 15:35:49.824] namespace "test-secrets" deleted
W0911 15:35:49.925] E0911 15:35:49.441513   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.926] E0911 15:35:49.541150   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.926] E0911 15:35:49.643122   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:49.927] E0911 15:35:49.738763   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:50.443] E0911 15:35:50.442750   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:50.522] I0911 15:35:50.521716   52874 namespace_controller.go:171] Namespace has been deleted my-namespace
W0911 15:35:50.543] E0911 15:35:50.542722   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:50.645] E0911 15:35:50.644440   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:50.740] E0911 15:35:50.740023   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:50.949] I0911 15:35:50.948887   52874 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0911 15:35:50.974] I0911 15:35:50.974252   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216031-8003
W0911 15:35:50.977] I0911 15:35:50.977480   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216034-9216
W0911 15:35:50.978] I0911 15:35:50.977480   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216020-14022
W0911 15:35:50.979] I0911 15:35:50.979022   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216018-1729
W0911 15:35:50.979] I0911 15:35:50.979241   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216034-7040
... skipping 16 lines ...
W0911 15:35:51.373] I0911 15:35:51.372842   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216083-30496
W0911 15:35:51.380] I0911 15:35:51.380222   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216083-29613
W0911 15:35:51.390] I0911 15:35:51.390135   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216097-13648
W0911 15:35:51.409] I0911 15:35:51.408406   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216101-17302
W0911 15:35:51.413] I0911 15:35:51.413130   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216091-22261
W0911 15:35:51.437] I0911 15:35:51.436806   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216065-28491
W0911 15:35:51.444] E0911 15:35:51.443992   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:51.450] I0911 15:35:51.450301   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216101-17382
W0911 15:35:51.455] I0911 15:35:51.455046   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216098-18874
W0911 15:35:51.527] I0911 15:35:51.526917   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216105-19019
W0911 15:35:51.529] I0911 15:35:51.528925   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216104-5152
W0911 15:35:51.529] I0911 15:35:51.528980   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216102-23450
W0911 15:35:51.543] I0911 15:35:51.543373   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216124-10002
W0911 15:35:51.544] E0911 15:35:51.543985   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:51.565] I0911 15:35:51.564972   52874 namespace_controller.go:171] Namespace has been deleted namespace-1568216125-11045
W0911 15:35:51.646] E0911 15:35:51.645615   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:51.741] E0911 15:35:51.741349   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:52.019] I0911 15:35:52.018777   52874 namespace_controller.go:171] Namespace has been deleted other
W0911 15:35:52.446] E0911 15:35:52.445398   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:52.545] E0911 15:35:52.545243   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:52.647] E0911 15:35:52.646993   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:52.743] E0911 15:35:52.742559   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:53.447] E0911 15:35:53.446666   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:53.547] E0911 15:35:53.546560   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:53.649] E0911 15:35:53.649315   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:53.744] E0911 15:35:53.743802   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:54.448] E0911 15:35:54.447894   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:54.548] E0911 15:35:54.547894   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:54.651] E0911 15:35:54.650480   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:54.745] E0911 15:35:54.745207   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:35:54.918] +++ exit code: 0
I0911 15:35:54.946] Recording: run_configmap_tests
I0911 15:35:54.946] Running command: run_configmap_tests
I0911 15:35:54.966] 
I0911 15:35:54.968] +++ Running case: test-cmd.run_configmap_tests 
I0911 15:35:54.970] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0911 15:35:55.927] configmap/test-binary-configmap created
I0911 15:35:56.004] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0911 15:35:56.076] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0911 15:35:56.289] configmap "test-configmap" deleted
I0911 15:35:56.361] configmap "test-binary-configmap" deleted
I0911 15:35:56.430] namespace "test-configmaps" deleted
W0911 15:35:56.531] E0911 15:35:55.448731   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
W0911 15:35:59.752] E0911 15:35:59.751387   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:35:59.898] I0911 15:35:59.897656   52874 namespace_controller.go:171] Namespace has been deleted test-secrets
W0911 15:36:00.456] E0911 15:36:00.455479   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:00.557] E0911 15:36:00.556870   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:00.660] E0911 15:36:00.659529   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:00.753] E0911 15:36:00.752700   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:01.457] E0911 15:36:01.456809   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:01.557] +++ exit code: 0
I0911 15:36:01.558] Recording: run_client_config_tests
I0911 15:36:01.558] Running command: run_client_config_tests
I0911 15:36:01.576] 
I0911 15:36:01.579] +++ Running case: test-cmd.run_client_config_tests 
I0911 15:36:01.581] +++ working dir: /go/src/k8s.io/kubernetes
I0911 15:36:01.584] +++ command: run_client_config_tests
I0911 15:36:01.594] +++ [0911 15:36:01] Creating namespace namespace-1568216161-22256
I0911 15:36:01.656] namespace/namespace-1568216161-22256 created
I0911 15:36:01.716] Context "test" modified.
I0911 15:36:01.721] +++ [0911 15:36:01] Testing client config
I0911 15:36:01.783] Successful
I0911 15:36:01.783] message:error: stat missing: no such file or directory
I0911 15:36:01.784] has:missing: no such file or directory
I0911 15:36:01.844] Successful
I0911 15:36:01.844] message:error: stat missing: no such file or directory
I0911 15:36:01.845] has:missing: no such file or directory
I0911 15:36:01.904] Successful
I0911 15:36:01.905] message:error: stat missing: no such file or directory
I0911 15:36:01.905] has:missing: no such file or directory
I0911 15:36:01.968] Successful
I0911 15:36:01.968] message:Error in configuration: context was not found for specified context: missing-context
I0911 15:36:01.969] has:context was not found for specified context: missing-context
I0911 15:36:02.028] Successful
I0911 15:36:02.029] message:error: no server found for cluster "missing-cluster"
I0911 15:36:02.029] has:no server found for cluster "missing-cluster"
I0911 15:36:02.091] Successful
I0911 15:36:02.091] message:error: auth info "missing-user" does not exist
I0911 15:36:02.091] has:auth info "missing-user" does not exist
W0911 15:36:02.192] E0911 15:36:01.558107   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:02.192] E0911 15:36:01.660353   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:02.192] E0911 15:36:01.753969   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:02.293] Successful
I0911 15:36:02.293] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0911 15:36:02.293] has:error loading config file
I0911 15:36:02.293] Successful
I0911 15:36:02.294] message:error: stat missing-config: no such file or directory
I0911 15:36:02.294] has:no such file or directory
I0911 15:36:02.294] +++ exit code: 0
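The client-config case above exercises kubectl's kubeconfig error paths. A minimal sketch of the invocations that produce these messages (file names are illustrative; the flags are standard kubectl flags):

    kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist

The /tmp/newconfig.yaml failure is the same pattern with a kubeconfig whose apiVersion is the unregistered "v-1".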
I0911 15:36:02.322] Recording: run_service_accounts_tests
I0911 15:36:02.322] Running command: run_service_accounts_tests
I0911 15:36:02.342] 
I0911 15:36:02.344] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0911 15:36:02.636] namespace/test-service-accounts created
I0911 15:36:02.716] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0911 15:36:02.780] serviceaccount/test-service-account created
I0911 15:36:02.858] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0911 15:36:02.924] serviceaccount "test-service-account" deleted
I0911 15:36:02.993] namespace "test-service-accounts" deleted
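The service-account case above maps to a short kubectl sequence; a sketch (names taken from the log):

    kubectl create namespace test-service-accounts
    kubectl create serviceaccount test-service-account --namespace=test-service-accounts
    kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
    kubectl delete namespace test-service-accounts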
W0911 15:36:03.094] E0911 15:36:02.458086   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:03.095] E0911 15:36:02.559487   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:03.095] E0911 15:36:02.661486   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:03.095] E0911 15:36:02.755321   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:03.459] E0911 15:36:03.459220   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 identical reflector errors (through 15:36:06.462) ...
W0911 15:36:06.505] I0911 15:36:06.505141   52874 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0911 15:36:06.565] E0911 15:36:06.564955   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 6 identical reflector errors (through 15:36:07.761) ...
I0911 15:36:08.092] +++ exit code: 0
I0911 15:36:08.122] Recording: run_job_tests
I0911 15:36:08.122] Running command: run_job_tests
I0911 15:36:08.142] 
I0911 15:36:08.144] +++ Running case: test-cmd.run_job_tests 
I0911 15:36:08.146] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0911 15:36:08.794] Labels:                        run=pi
I0911 15:36:08.794] Annotations:                   <none>
I0911 15:36:08.795] Schedule:                      59 23 31 2 *
I0911 15:36:08.795] Concurrency Policy:            Allow
I0911 15:36:08.795] Suspend:                       False
I0911 15:36:08.795] Successful Job History Limit:  3
I0911 15:36:08.795] Failed Job History Limit:      1
I0911 15:36:08.795] Starting Deadline Seconds:     <unset>
I0911 15:36:08.795] Selector:                      <unset>
I0911 15:36:08.796] Parallelism:                   <unset>
I0911 15:36:08.796] Completions:                   <unset>
I0911 15:36:08.796] Pod Template:
I0911 15:36:08.796]   Labels:  run=pi
... skipping 32 lines ...
I0911 15:36:09.283]                 run=pi
I0911 15:36:09.284] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0911 15:36:09.284] Controlled By:  CronJob/pi
I0911 15:36:09.284] Parallelism:    1
I0911 15:36:09.284] Completions:    1
I0911 15:36:09.284] Start Time:     Wed, 11 Sep 2019 15:36:09 +0000
I0911 15:36:09.284] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0911 15:36:09.284] Pod Template:
I0911 15:36:09.284]   Labels:  controller-uid=ddd13907-7aa1-4b36-9e9f-110d989bb7d0
I0911 15:36:09.285]            job-name=test-job
I0911 15:36:09.285]            run=pi
I0911 15:36:09.285]   Containers:
I0911 15:36:09.285]    pi:
... skipping 15 lines ...
I0911 15:36:09.287]   Type    Reason            Age   From            Message
I0911 15:36:09.287]   ----    ------            ----  ----            -------
I0911 15:36:09.287]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-894rn
I0911 15:36:09.358] job.batch "test-job" deleted
I0911 15:36:09.438] cronjob.batch "pi" deleted
I0911 15:36:09.510] namespace "test-jobs" deleted
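The job case above creates a CronJob and then instantiates a one-off Job from it; the cronjob.kubernetes.io/instantiate: manual annotation in the describe output is what kubectl create job --from=cronjob/... sets. A sketch, with the image and command as placeholders (schedule and names come from the log):

    kubectl run pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' \
        --restart=OnFailure --image=perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
    kubectl create job test-job --from=cronjob/pi
    kubectl describe job test-job
    kubectl delete job test-job && kubectl delete cronjob pi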
W0911 15:36:09.611] E0911 15:36:08.465240   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.612] E0911 15:36:08.567750   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.612] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:36:09.612] E0911 15:36:08.669263   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.613] E0911 15:36:08.763052   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.613] I0911 15:36:09.026929   52874 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"ddd13907-7aa1-4b36-9e9f-110d989bb7d0", APIVersion:"batch/v1", ResourceVersion:"1382", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-894rn
W0911 15:36:09.613] E0911 15:36:09.466268   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.613] E0911 15:36:09.568625   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.671] E0911 15:36:09.670664   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:09.765] E0911 15:36:09.764563   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:10.467] E0911 15:36:10.467447   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 identical reflector errors (through 15:36:12.768) ...
W0911 15:36:13.071] I0911 15:36:13.071195   52874 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0911 15:36:13.471] E0911 15:36:13.471086   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 identical reflector errors (through 15:36:14.575) ...
I0911 15:36:14.676] +++ exit code: 0
I0911 15:36:14.677] Recording: run_create_job_tests
I0911 15:36:14.677] Running command: run_create_job_tests
I0911 15:36:14.677] 
I0911 15:36:14.677] +++ Running case: test-cmd.run_create_job_tests 
I0911 15:36:14.678] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0911 15:36:15.827] +++ [0911 15:36:15] Testing pod templates
I0911 15:36:15.904] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:16.047] podtemplate/nginx created
I0911 15:36:16.132] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0911 15:36:16.200] NAME    CONTAINERS   IMAGES   POD LABELS
I0911 15:36:16.200] nginx   nginx        nginx    name=nginx
W0911 15:36:16.300] E0911 15:36:14.679444   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.301] E0911 15:36:14.771740   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.301] I0911 15:36:14.870629   52874 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568216174-5062", Name:"test-job", UID:"1f03a5ae-7897-4c20-966f-fae7456d44e4", APIVersion:"batch/v1", ResourceVersion:"1400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-tf2mc
W0911 15:36:16.301] I0911 15:36:15.086705   52874 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568216174-5062", Name:"test-job-pi", UID:"c1f1a356-e0fd-4412-b686-08475768ba4b", APIVersion:"batch/v1", ResourceVersion:"1407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-4rdtf
W0911 15:36:16.302] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:36:16.302] I0911 15:36:15.390619   52874 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568216174-5062", Name:"my-pi", UID:"72b99857-0ff8-44db-869d-19d68a1874d1", APIVersion:"batch/v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-62xt5
W0911 15:36:16.302] E0911 15:36:15.473488   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.302] E0911 15:36:15.576773   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.302] E0911 15:36:15.680809   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.303] E0911 15:36:15.773086   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:16.303] I0911 15:36:16.044802   49323 controller.go:606] quota admission added evaluator for: podtemplates
I0911 15:36:16.403] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0911 15:36:16.421] podtemplate "nginx" deleted
I0911 15:36:16.502] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:16.513] +++ exit code: 0
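The pod-template case above can be reproduced with a manifest along these lines; a sketch, with field values inferred from the CONTAINERS/IMAGES/POD LABELS columns:

    # podtemplate.yaml (illustrative)
    apiVersion: v1
    kind: PodTemplate
    metadata:
      name: nginx
    template:
      metadata:
        labels:
          name: nginx
      spec:
        containers:
        - name: nginx
          image: nginx

    kubectl create -f podtemplate.yaml
    kubectl get podtemplates
    kubectl delete podtemplate nginx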
I0911 15:36:16.542] Recording: run_service_tests
... skipping 66 lines ...
I0911 15:36:17.302] Port:              <unset>  6379/TCP
I0911 15:36:17.302] TargetPort:        6379/TCP
I0911 15:36:17.302] Endpoints:         <none>
I0911 15:36:17.302] Session Affinity:  None
I0911 15:36:17.303] Events:            <none>
I0911 15:36:17.303]
W0911 15:36:17.403] E0911 15:36:16.474723   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 identical reflector errors (through 15:36:17.475) ...
I0911 15:36:17.576] Successful describe services:
I0911 15:36:17.577] Name:              kubernetes
I0911 15:36:17.577] Namespace:         default
I0911 15:36:17.577] Labels:            component=apiserver
I0911 15:36:17.577]                    provider=kubernetes
I0911 15:36:17.577] Annotations:       <none>
... skipping 178 lines ...
I0911 15:36:18.224]   selector:
I0911 15:36:18.224]     role: padawan
I0911 15:36:18.224]   sessionAffinity: None
I0911 15:36:18.224]   type: ClusterIP
I0911 15:36:18.224] status:
I0911 15:36:18.224]   loadBalancer: {}
W0911 15:36:18.325] E0911 15:36:17.579289   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:18.325] E0911 15:36:17.683197   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:18.326] E0911 15:36:17.775739   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:18.326] error: you must specify resources by --filename when --local is set.
W0911 15:36:18.326] Example resource specifications include:
W0911 15:36:18.326]    '-f rsrc.yaml'
W0911 15:36:18.326]    '--filename=rsrc.json'
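The padawan-selector YAML and the --local error above are consistent with kubectl set selector being run with --local but without a file; a sketch (hypothetical reconstruction; service and file names are illustrative):

    kubectl set selector svc redis-master role=padawan --local -o yaml            # fails: --local requires -f
    kubectl set selector -f redis-master-svc.yaml role=padawan --local -o yaml    # edits the local file only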
I0911 15:36:18.427] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0911 15:36:18.493] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0911 15:36:18.566] service "redis-master" deleted
... skipping 8 lines ...
I0911 15:36:19.506] core.sh:952: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
I0911 15:36:19.583] service "redis-master" deleted
I0911 15:36:19.667] service "service-v1-test" deleted
I0911 15:36:19.754] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0911 15:36:19.835] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0911 15:36:19.967] service/redis-master created
W0911 15:36:20.068] E0911 15:36:18.476979   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 identical reflector errors (through 15:36:19.581) ...
W0911 15:36:20.069] I0911 15:36:19.593918   52874 namespace_controller.go:171] Namespace has been deleted test-jobs
W0911 15:36:20.070] E0911 15:36:19.686134   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:20.070] E0911 15:36:19.778770   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:20.170] service/redis-slave created
I0911 15:36:20.189] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0911 15:36:20.260] Successful
I0911 15:36:20.260] message:NAME           RSRC
I0911 15:36:20.260] kubernetes     145
I0911 15:36:20.260] redis-master   1450
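The NAME/RSRC listing above matches a custom-columns query over the services; a sketch:

    kubectl get services -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion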
... skipping 84 lines ...
I0911 15:36:24.436] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:24.510] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0911 15:36:24.594] daemonset.apps/bind rolled back
I0911 15:36:24.681] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0911 15:36:24.760] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0911 15:36:24.845] Successful
I0911 15:36:24.845] message:error: unable to find specified revision 1000000 in history
I0911 15:36:24.845] has:unable to find specified revision
I0911 15:36:24.921] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0911 15:36:24.997] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0911 15:36:25.082] daemonset.apps/bind rolled back
I0911 15:36:25.166] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0911 15:36:25.243] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
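The sequence above exercises kubectl rollout undo on a DaemonSet, including the failure path for a nonexistent revision; a sketch (daemonset name from the log; revision numbers other than 1000000 are illustrative):

    kubectl rollout undo daemonset bind                          # back to the previous revision
    kubectl rollout undo daemonset bind --to-revision=1000000    # error: unable to find specified revision
    kubectl rollout undo daemonset bind --to-revision=2          # roll to a concrete revision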
... skipping 22 lines ...
I0911 15:36:26.384] Namespace:    namespace-1568216185-3803
I0911 15:36:26.385] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.385] Labels:       app=guestbook
I0911 15:36:26.385]               tier=frontend
I0911 15:36:26.385] Annotations:  <none>
I0911 15:36:26.385] Replicas:     3 current / 3 desired
I0911 15:36:26.385] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.385] Pod Template:
I0911 15:36:26.385]   Labels:  app=guestbook
I0911 15:36:26.385]            tier=frontend
I0911 15:36:26.385]   Containers:
I0911 15:36:26.385]    php-redis:
I0911 15:36:26.385]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0911 15:36:26.476] Namespace:    namespace-1568216185-3803
I0911 15:36:26.476] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.476] Labels:       app=guestbook
I0911 15:36:26.476]               tier=frontend
I0911 15:36:26.476] Annotations:  <none>
I0911 15:36:26.476] Replicas:     3 current / 3 desired
I0911 15:36:26.476] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.477] Pod Template:
I0911 15:36:26.477]   Labels:  app=guestbook
I0911 15:36:26.477]            tier=frontend
I0911 15:36:26.477]   Containers:
I0911 15:36:26.477]    php-redis:
I0911 15:36:26.477]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0911 15:36:26.567] Namespace:    namespace-1568216185-3803
I0911 15:36:26.567] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.568] Labels:       app=guestbook
I0911 15:36:26.568]               tier=frontend
I0911 15:36:26.568] Annotations:  <none>
I0911 15:36:26.568] Replicas:     3 current / 3 desired
I0911 15:36:26.568] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.568] Pod Template:
I0911 15:36:26.568]   Labels:  app=guestbook
I0911 15:36:26.568]            tier=frontend
I0911 15:36:26.568]   Containers:
I0911 15:36:26.568]    php-redis:
I0911 15:36:26.568]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0911 15:36:26.665] Namespace:    namespace-1568216185-3803
I0911 15:36:26.665] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.666] Labels:       app=guestbook
I0911 15:36:26.666]               tier=frontend
I0911 15:36:26.666] Annotations:  <none>
I0911 15:36:26.666] Replicas:     3 current / 3 desired
I0911 15:36:26.666] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.666] Pod Template:
I0911 15:36:26.667]   Labels:  app=guestbook
I0911 15:36:26.667]            tier=frontend
I0911 15:36:26.667]   Containers:
I0911 15:36:26.667]    php-redis:
I0911 15:36:26.667]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0911 15:36:26.668]   Type    Reason            Age   From                    Message
I0911 15:36:26.669]   ----    ------            ----  ----                    -------
I0911 15:36:26.669]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-hfcmq
I0911 15:36:26.669]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-7fs5t
I0911 15:36:26.670]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-p9w52
I0911 15:36:26.670]
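The four near-identical frontend blocks above come from describing the same replication controller in several ways; the exact variants are not visible here, but a sketch of the usual forms (--show-events is a real describe flag):

    kubectl describe rc frontend
    kubectl describe rc frontend --show-events=false
    kubectl describe replicationcontrollers/frontend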
W0911 15:36:26.770] E0911 15:36:20.479376   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.771] E0911 15:36:20.582822   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.771] E0911 15:36:20.687242   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.771] E0911 15:36:20.779988   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.771] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0911 15:36:26.772] I0911 15:36:21.082685   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"05b7ee33-6fa6-4306-b1ae-4e03c5921400", APIVersion:"apps/v1", ResourceVersion:"1465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0911 15:36:26.772] I0911 15:36:21.087407   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"ec6edcbd-ba2d-47e6-9a31-9e97bed4d006", APIVersion:"apps/v1", ResourceVersion:"1466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-h9pcg
W0911 15:36:26.772] I0911 15:36:21.089377   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"ec6edcbd-ba2d-47e6-9a31-9e97bed4d006", APIVersion:"apps/v1", ResourceVersion:"1466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-k92zv
W0911 15:36:26.772] E0911 15:36:21.480535   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.773] E0911 15:36:21.583674   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.773] E0911 15:36:21.688637   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.773] E0911 15:36:21.781111   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.773] I0911 15:36:21.994466   49323 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0911 15:36:26.773] I0911 15:36:22.002515   49323 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0911 15:36:26.773] E0911 15:36:22.481784   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 identical reflector errors (through 15:36:25.785) ...
W0911 15:36:26.777] I0911 15:36:25.806989   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"4d971621-7b87-4403-863a-1bded7e82f4b", APIVersion:"v1", ResourceVersion:"1542", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qbb95
W0911 15:36:26.777] I0911 15:36:25.809191   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"4d971621-7b87-4403-863a-1bded7e82f4b", APIVersion:"v1", ResourceVersion:"1542", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vlwhv
W0911 15:36:26.777] I0911 15:36:25.809867   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"4d971621-7b87-4403-863a-1bded7e82f4b", APIVersion:"v1", ResourceVersion:"1542", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4kp4m
W0911 15:36:26.778] I0911 15:36:26.174720   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hfcmq
W0911 15:36:26.778] I0911 15:36:26.176827   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7fs5t
W0911 15:36:26.778] I0911 15:36:26.178570   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p9w52
W0911 15:36:26.778] E0911 15:36:26.486286   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.779] E0911 15:36:26.589165   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.779] E0911 15:36:26.694820   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:26.787] E0911 15:36:26.787030   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:26.888] Successful describe rc:
I0911 15:36:26.888] Name:         frontend
I0911 15:36:26.888] Namespace:    namespace-1568216185-3803
I0911 15:36:26.888] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.888] Labels:       app=guestbook
I0911 15:36:26.888]               tier=frontend
I0911 15:36:26.888] Annotations:  <none>
I0911 15:36:26.889] Replicas:     3 current / 3 desired
I0911 15:36:26.889] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.889] Pod Template:
I0911 15:36:26.889]   Labels:  app=guestbook
I0911 15:36:26.889]            tier=frontend
I0911 15:36:26.889]   Containers:
I0911 15:36:26.889]    php-redis:
I0911 15:36:26.889]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0911 15:36:26.891] Namespace:    namespace-1568216185-3803
I0911 15:36:26.891] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.891] Labels:       app=guestbook
I0911 15:36:26.891]               tier=frontend
I0911 15:36:26.891] Annotations:  <none>
I0911 15:36:26.891] Replicas:     3 current / 3 desired
I0911 15:36:26.891] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.891] Pod Template:
I0911 15:36:26.891]   Labels:  app=guestbook
I0911 15:36:26.891]            tier=frontend
I0911 15:36:26.891]   Containers:
I0911 15:36:26.892]    php-redis:
I0911 15:36:26.892]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0911 15:36:26.973] Namespace:    namespace-1568216185-3803
I0911 15:36:26.973] Selector:     app=guestbook,tier=frontend
I0911 15:36:26.973] Labels:       app=guestbook
I0911 15:36:26.973]               tier=frontend
I0911 15:36:26.973] Annotations:  <none>
I0911 15:36:26.974] Replicas:     3 current / 3 desired
I0911 15:36:26.974] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:26.974] Pod Template:
I0911 15:36:26.974]   Labels:  app=guestbook
I0911 15:36:26.974]            tier=frontend
I0911 15:36:26.974]   Containers:
I0911 15:36:26.974]    php-redis:
I0911 15:36:26.974]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0911 15:36:27.060] Namespace:    namespace-1568216185-3803
I0911 15:36:27.060] Selector:     app=guestbook,tier=frontend
I0911 15:36:27.060] Labels:       app=guestbook
I0911 15:36:27.061]               tier=frontend
I0911 15:36:27.061] Annotations:  <none>
I0911 15:36:27.061] Replicas:     3 current / 3 desired
I0911 15:36:27.061] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:27.061] Pod Template:
I0911 15:36:27.061]   Labels:  app=guestbook
I0911 15:36:27.061]            tier=frontend
I0911 15:36:27.061]   Containers:
I0911 15:36:27.061]    php-redis:
I0911 15:36:27.061]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0911 15:36:27.754] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0911 15:36:27.828] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0911 15:36:27.895] replicationcontroller/frontend scaled
I0911 15:36:27.974] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0911 15:36:28.042] replicationcontroller "frontend" deleted
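The scale-down above, together with the "Expected replicas to be 3, was 2" error logged below, matches kubectl's --current-replicas precondition; a sketch (the exact ordering of the calls is not visible in the log):

    kubectl scale rc frontend --replicas=2 --current-replicas=3   # succeeds only while exactly 3 replicas exist
    kubectl scale rc frontend --replicas=2 --current-replicas=3   # repeated: fails, replicas are now 2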
W0911 15:36:28.143] I0911 15:36:27.213054   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1568", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-p9w52
W0911 15:36:28.144] error: Expected replicas to be 3, was 2
W0911 15:36:28.145] E0911 15:36:27.487395   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.145] E0911 15:36:27.590586   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.145] I0911 15:36:27.675138   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1574", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xmq62
W0911 15:36:28.146] E0911 15:36:27.695852   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.146] E0911 15:36:27.788273   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.147] I0911 15:36:27.900065   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"99214afb-f7c3-4840-b20f-7bf3920d707a", APIVersion:"v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-xmq62
W0911 15:36:28.191] I0911 15:36:28.190484   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-master", UID:"ff55e319-c61e-4377-8f12-4ac55d681996", APIVersion:"v1", ResourceVersion:"1590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-bxjzh
I0911 15:36:28.294] replicationcontroller/redis-master created
I0911 15:36:28.328] replicationcontroller/redis-slave created
I0911 15:36:28.408] replicationcontroller/redis-master scaled
I0911 15:36:28.412] replicationcontroller/redis-slave scaled
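The paired "scaled" lines above suggest a single multi-resource scale; a sketch (the target count of 4 is inferred from the pod-creation events logged below):

    kubectl scale rc redis-master redis-slave --replicas=4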
... skipping 5 lines ...
W0911 15:36:28.740] I0911 15:36:28.333586   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-slave", UID:"80619864-a35c-4efc-9666-05144987422f", APIVersion:"v1", ResourceVersion:"1595", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-zlv4d
W0911 15:36:28.740] I0911 15:36:28.410136   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-master", UID:"ff55e319-c61e-4377-8f12-4ac55d681996", APIVersion:"v1", ResourceVersion:"1602", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-574kl
W0911 15:36:28.741] I0911 15:36:28.413262   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-master", UID:"ff55e319-c61e-4377-8f12-4ac55d681996", APIVersion:"v1", ResourceVersion:"1602", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7fnsm
W0911 15:36:28.742] I0911 15:36:28.413575   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-master", UID:"ff55e319-c61e-4377-8f12-4ac55d681996", APIVersion:"v1", ResourceVersion:"1602", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-f69jr
W0911 15:36:28.743] I0911 15:36:28.416345   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-slave", UID:"80619864-a35c-4efc-9666-05144987422f", APIVersion:"v1", ResourceVersion:"1604", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-pd4lc
W0911 15:36:28.744] I0911 15:36:28.418663   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-slave", UID:"80619864-a35c-4efc-9666-05144987422f", APIVersion:"v1", ResourceVersion:"1604", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-jpwm8
W0911 15:36:28.744] E0911 15:36:28.488916   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.745] E0911 15:36:28.591777   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.745] E0911 15:36:28.697020   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.785] I0911 15:36:28.784609   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment", UID:"61fe12e9-dfe3-4d26-af6a-a7de2c6ab463", APIVersion:"apps/v1", ResourceVersion:"1636", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0911 15:36:28.787] I0911 15:36:28.787091   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"64b578b0-43c3-426d-a84a-7729cdd55995", APIVersion:"apps/v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-bhzp7
W0911 15:36:28.790] I0911 15:36:28.790382   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"64b578b0-43c3-426d-a84a-7729cdd55995", APIVersion:"apps/v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-f6568
W0911 15:36:28.791] E0911 15:36:28.790489   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:28.791] I0911 15:36:28.790577   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"64b578b0-43c3-426d-a84a-7729cdd55995", APIVersion:"apps/v1", ResourceVersion:"1637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-7rncx
W0911 15:36:28.870] I0911 15:36:28.869347   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment", UID:"61fe12e9-dfe3-4d26-af6a-a7de2c6ab463", APIVersion:"apps/v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0911 15:36:28.875] I0911 15:36:28.874776   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"64b578b0-43c3-426d-a84a-7729cdd55995", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-7rncx
W0911 15:36:28.877] I0911 15:36:28.877400   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"64b578b0-43c3-426d-a84a-7729cdd55995", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-bhzp7
I0911 15:36:28.978] deployment.apps/nginx-deployment created
I0911 15:36:28.978] deployment.apps/nginx-deployment scaled
I0911 15:36:28.979] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0911 15:36:29.028] (Bdeployment.apps "nginx-deployment" deleted
I0911 15:36:29.125] Successful
I0911 15:36:29.125] message:service/expose-test-deployment exposed
I0911 15:36:29.125] has:service/expose-test-deployment exposed
I0911 15:36:29.214] service "expose-test-deployment" deleted
I0911 15:36:29.298] Successful
I0911 15:36:29.299] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0911 15:36:29.299] See 'kubectl expose -h' for help and examples
I0911 15:36:29.299] has:invalid deployment: no selectors
I0911 15:36:29.438] deployment.apps/nginx-deployment created
I0911 15:36:29.540] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0911 15:36:29.625] service/nginx-deployment exposed
I0911 15:36:29.711] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0911 15:36:29.783] deployment.apps "nginx-deployment" deleted
I0911 15:36:29.792] service "nginx-deployment" deleted
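The expose sequence above covers both the failure path (a deployment with no selector cannot be exposed) and the normal path; a sketch (the port comes from the core.sh:1150 assertion):

    kubectl expose deployment nginx-deployment --port=80
    # a selector-less deployment instead fails with:
    #   error: couldn't retrieve selectors via --selector flag or introspection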
W0911 15:36:29.893] I0911 15:36:29.441738   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment", UID:"ed7df784-5eeb-4f06-a772-a406b73206a9", APIVersion:"apps/v1", ResourceVersion:"1676", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0911 15:36:29.893] I0911 15:36:29.444493   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"e8dbe7e1-c730-4147-95a0-cc3bab0e91af", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-hk247
W0911 15:36:29.893] I0911 15:36:29.446568   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"e8dbe7e1-c730-4147-95a0-cc3bab0e91af", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vdgrc
W0911 15:36:29.894] I0911 15:36:29.447442   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-6986c7bc94", UID:"e8dbe7e1-c730-4147-95a0-cc3bab0e91af", APIVersion:"apps/v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-685vp
W0911 15:36:29.894] E0911 15:36:29.490093   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:29.894] E0911 15:36:29.594188   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:29.894] E0911 15:36:29.698202   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:29.895] E0911 15:36:29.797384   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:29.950] I0911 15:36:29.949826   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"e2338ca7-6df5-4ce2-9aa2-a28fbc4dfa7c", APIVersion:"v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8fb57
W0911 15:36:29.953] I0911 15:36:29.952674   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"e2338ca7-6df5-4ce2-9aa2-a28fbc4dfa7c", APIVersion:"v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xj445
W0911 15:36:29.954] I0911 15:36:29.952719   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"e2338ca7-6df5-4ce2-9aa2-a28fbc4dfa7c", APIVersion:"v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9szzz
I0911 15:36:30.054] replicationcontroller/frontend created
I0911 15:36:30.054] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0911 15:36:30.113] service/frontend exposed
... skipping 11 lines ...
I0911 15:36:31.102] service "frontend" deleted
I0911 15:36:31.109] service "frontend-2" deleted
I0911 15:36:31.115] service "frontend-3" deleted
I0911 15:36:31.121] service "frontend-4" deleted
I0911 15:36:31.126] service "frontend-5" deleted
I0911 15:36:31.211] Successful
I0911 15:36:31.211] message:error: cannot expose a Node
I0911 15:36:31.211] has:cannot expose
I0911 15:36:31.290] Successful
I0911 15:36:31.291] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0911 15:36:31.291] has:metadata.name: Invalid value
I0911 15:36:31.373] Successful
I0911 15:36:31.373] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 7 lines ...
I0911 15:36:31.759] service "etcd-server" deleted
I0911 15:36:31.843] core.sh:1215: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
I0911 15:36:31.914] replicationcontroller "frontend" deleted
I0911 15:36:31.992] core.sh:1219: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:32.066] core.sh:1223: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:32.206] replicationcontroller/frontend created
W0911 15:36:32.306] E0911 15:36:30.491613   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.307] E0911 15:36:30.595711   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.307] E0911 15:36:30.699716   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.307] E0911 15:36:30.798704   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.308] E0911 15:36:31.492765   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.308] E0911 15:36:31.596857   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.308] E0911 15:36:31.700976   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.308] E0911 15:36:31.799865   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:32.308] I0911 15:36:32.209308   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"91af4c89-0c0b-4bf1-81a8-4b540719a4f3", APIVersion:"v1", ResourceVersion:"1766", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-txv25
W0911 15:36:32.309] I0911 15:36:32.211635   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"91af4c89-0c0b-4bf1-81a8-4b540719a4f3", APIVersion:"v1", ResourceVersion:"1766", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-txtrz
W0911 15:36:32.309] I0911 15:36:32.211999   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"91af4c89-0c0b-4bf1-81a8-4b540719a4f3", APIVersion:"v1", ResourceVersion:"1766", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vtcbf
W0911 15:36:32.348] I0911 15:36:32.347580   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-slave", UID:"ff92e173-6fb3-4865-bf31-8bb3dcc16675", APIVersion:"v1", ResourceVersion:"1775", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-8bfc6
W0911 15:36:32.350] I0911 15:36:32.350118   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"redis-slave", UID:"ff92e173-6fb3-4865-bf31-8bb3dcc16675", APIVersion:"v1", ResourceVersion:"1775", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-ncjnn
I0911 15:36:32.451] replicationcontroller/redis-slave created
... skipping 8 lines ...
I0911 15:36:33.049] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0911 15:36:33.130] core.sh:1246: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0911 15:36:33.199] horizontalpodautoscaler.autoscaling "frontend" deleted
I0911 15:36:33.274] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0911 15:36:33.355] core.sh:1250: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0911 15:36:33.424] horizontalpodautoscaler.autoscaling "frontend" deleted
W0911 15:36:33.525] E0911 15:36:32.493819   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.525] E0911 15:36:32.598625   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.525] E0911 15:36:32.702354   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.525] E0911 15:36:32.801281   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.526] I0911 15:36:32.900252   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"af616c74-cf62-47e6-bc5e-5b7f8340c7cb", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m9qpx
W0911 15:36:33.526] I0911 15:36:32.902572   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"af616c74-cf62-47e6-bc5e-5b7f8340c7cb", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jrllh
W0911 15:36:33.527] I0911 15:36:32.902967   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568216185-3803", Name:"frontend", UID:"af616c74-cf62-47e6-bc5e-5b7f8340c7cb", APIVersion:"v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2mf86
W0911 15:36:33.527] Error: required flag(s) "max" not set
W0911 15:36:33.527] 
W0911 15:36:33.527] 
W0911 15:36:33.527] Examples:
W0911 15:36:33.527]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0911 15:36:33.527]   kubectl autoscale deployment foo --min=2 --max=10
W0911 15:36:33.527]   
... skipping 18 lines ...
W0911 15:36:33.531] 
W0911 15:36:33.531] Usage:
W0911 15:36:33.531]   kubectl autoscale (-f FILENAME | TYPE NAME | TYPE/NAME) [--min=MINPODS] --max=MAXPODS [--cpu-percent=CPU] [options]
W0911 15:36:33.532] 
W0911 15:36:33.532] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0911 15:36:33.532] 
W0911 15:36:33.532] E0911 15:36:33.495127   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.600] E0911 15:36:33.599963   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:33.701] replicationcontroller "frontend" deleted
I0911 15:36:33.701] core.sh:1259: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:33.708] apiVersion: apps/v1
I0911 15:36:33.708] kind: Deployment
I0911 15:36:33.709] metadata:
I0911 15:36:33.709]   creationTimestamp: null
... skipping 24 lines ...
I0911 15:36:33.711]           limits:
I0911 15:36:33.711]             cpu: 300m
I0911 15:36:33.711]           requests:
I0911 15:36:33.712]             cpu: 300m
I0911 15:36:33.712]       terminationGracePeriodSeconds: 0
I0911 15:36:33.712] status: {}
W0911 15:36:33.812] E0911 15:36:33.703843   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:33.813] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
W0911 15:36:33.813] E0911 15:36:33.802717   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:33.937] deployment.apps/nginx-deployment-resources created
I0911 15:36:34.028] core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0911 15:36:34.108] core.sh:1266: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:34.187] core.sh:1267: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0911 15:36:34.268] deployment.apps/nginx-deployment-resources resource requirements updated
I0911 15:36:34.357] core.sh:1270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0911 15:36:35.282] I0911 15:36:33.940704   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
W0911 15:36:35.282] I0911 15:36:33.943871   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-67f8cfff5", UID:"bdc46fe6-3384-4590-a752-09f086019e3d", APIVersion:"apps/v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-tbcm7
W0911 15:36:35.283] I0911 15:36:33.946337   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-67f8cfff5", UID:"bdc46fe6-3384-4590-a752-09f086019e3d", APIVersion:"apps/v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-zc2fv
W0911 15:36:35.284] I0911 15:36:33.946650   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-67f8cfff5", UID:"bdc46fe6-3384-4590-a752-09f086019e3d", APIVersion:"apps/v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-5fhsd
W0911 15:36:35.284] I0911 15:36:34.271101   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1829", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
W0911 15:36:35.285] I0911 15:36:34.274103   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-55c547f795", UID:"31402e08-5d8a-4a0d-89c4-4cee37dfb669", APIVersion:"apps/v1", ResourceVersion:"1830", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-m9s9d
W0911 15:36:35.285] E0911 15:36:34.496260   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:35.285] error: unable to find container named redis
W0911 15:36:35.286] I0911 15:36:34.598460   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1839", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
W0911 15:36:35.286] E0911 15:36:34.602169   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:35.287] I0911 15:36:34.605410   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1841", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
W0911 15:36:35.287] I0911 15:36:34.605514   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-67f8cfff5", UID:"bdc46fe6-3384-4590-a752-09f086019e3d", APIVersion:"apps/v1", ResourceVersion:"1843", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-tbcm7
W0911 15:36:35.288] I0911 15:36:34.609600   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-6d86564b45", UID:"9bec343a-f237-4052-a1e1-25dc3ef1b5cc", APIVersion:"apps/v1", ResourceVersion:"1846", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-k5m8b
W0911 15:36:35.288] E0911 15:36:34.705086   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:35.288] E0911 15:36:34.804132   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:35.289] I0911 15:36:34.857125   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1860", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 1
W0911 15:36:35.289] I0911 15:36:34.861445   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-67f8cfff5", UID:"bdc46fe6-3384-4590-a752-09f086019e3d", APIVersion:"apps/v1", ResourceVersion:"1864", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-5fhsd
W0911 15:36:35.290] I0911 15:36:34.862641   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources", UID:"364b50a9-8441-4a1d-b0e2-d54b6edb9f68", APIVersion:"apps/v1", ResourceVersion:"1862", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
W0911 15:36:35.290] I0911 15:36:34.866310   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216185-3803", Name:"nginx-deployment-resources-6c478d4fdb", UID:"a09ccc8d-05b0-4f32-a08b-45013cc4eb78", APIVersion:"apps/v1", ResourceVersion:"1868", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-g2jtf
W0911 15:36:35.291] error: you must specify resources by --filename when --local is set.
W0911 15:36:35.291] Example resource specifications include:
W0911 15:36:35.291]    '-f rsrc.yaml'
W0911 15:36:35.291]    '--filename=rsrc.json'
I0911 15:36:35.391] core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0911 15:36:35.404] core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0911 15:36:35.482] core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 23 lines ...
I0911 15:36:36.348] Successful
I0911 15:36:36.348] message:10
I0911 15:36:36.348] has:10
I0911 15:36:36.418] Successful
I0911 15:36:36.418] message:apps/v1
I0911 15:36:36.418] has:apps/v1
W0911 15:36:36.519] E0911 15:36:35.497530   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:36.519] E0911 15:36:35.603723   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:36.519] E0911 15:36:35.705979   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:36.520] E0911 15:36:35.805220   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:36.520] I0911 15:36:35.839387   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"test-nginx-extensions", UID:"910d3ce3-9b99-4ac6-b2b8-61bdec0f6676", APIVersion:"apps/v1", ResourceVersion:"1897", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
W0911 15:36:36.521] I0911 15:36:35.844073   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"test-nginx-extensions-5559c76db7", UID:"5ec713d1-2b6d-4c93-8348-b7cff261b0db", APIVersion:"apps/v1", ResourceVersion:"1898", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-qn6jp
W0911 15:36:36.521] I0911 15:36:36.198100   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"test-nginx-apps", UID:"f0feaaa1-2bba-423f-80ac-b1e0f72c2b4b", APIVersion:"apps/v1", ResourceVersion:"1911", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
W0911 15:36:36.521] I0911 15:36:36.200704   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"test-nginx-apps-79b9bd9585", UID:"c68aa716-8b2a-4c90-b90b-ed89969fded2", APIVersion:"apps/v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-apps-79b9bd9585-6qs8h
W0911 15:36:36.522] E0911 15:36:36.498663   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:36.605] E0911 15:36:36.604839   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:36.706] Successful describe rs:
I0911 15:36:36.706] Name:           test-nginx-apps-79b9bd9585
I0911 15:36:36.706] Namespace:      namespace-1568216195-27592
I0911 15:36:36.706] Selector:       app=test-nginx-apps,pod-template-hash=79b9bd9585
I0911 15:36:36.706] Labels:         app=test-nginx-apps
I0911 15:36:36.707]                 pod-template-hash=79b9bd9585
I0911 15:36:36.707] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0911 15:36:36.707]                 deployment.kubernetes.io/max-replicas: 2
I0911 15:36:36.707]                 deployment.kubernetes.io/revision: 1
I0911 15:36:36.707] Controlled By:  Deployment/test-nginx-apps
I0911 15:36:36.708] Replicas:       1 current / 1 desired
I0911 15:36:36.708] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:36.708] Pod Template:
I0911 15:36:36.708]   Labels:  app=test-nginx-apps
I0911 15:36:36.708]            pod-template-hash=79b9bd9585
I0911 15:36:36.708]   Containers:
I0911 15:36:36.708]    nginx:
I0911 15:36:36.709]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 46 lines ...
I0911 15:36:37.835] apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:37.906] apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:37.976] apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:38.038] deployment.apps/nginx-deployment created
I0911 15:36:38.123] apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0911 15:36:38.188] deployment.apps "nginx-deployment" deleted
W0911 15:36:38.289] E0911 15:36:36.707148   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.290] E0911 15:36:36.806399   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.290] I0911 15:36:36.860391   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-with-command", UID:"e0950ac9-c1ca-44f6-9f1d-86ff9919407d", APIVersion:"apps/v1", ResourceVersion:"1925", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
W0911 15:36:38.291] I0911 15:36:36.863046   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-with-command-757c6f58dd", UID:"15048391-aa19-4e96-b30c-3be45af85d6c", APIVersion:"apps/v1", ResourceVersion:"1926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-94fmc
W0911 15:36:38.291] I0911 15:36:37.231479   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"deployment-with-unixuserid", UID:"c7d60c86-5fed-4a97-9e56-076437eab5b9", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
W0911 15:36:38.292] I0911 15:36:37.233117   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"c9438a0f-1fde-4547-af62-408ebc762a06", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-f4mp8
W0911 15:36:38.292] E0911 15:36:37.499752   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.292] I0911 15:36:37.602957   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"074065e6-6b1b-430d-9774-2f7a59c0a5ab", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0911 15:36:38.292] I0911 15:36:37.605634   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"64bacfc4-137e-459b-9bb2-9c91ba44a51f", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4wzlw
W0911 15:36:38.293] E0911 15:36:37.606013   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.293] I0911 15:36:37.608192   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"64bacfc4-137e-459b-9bb2-9c91ba44a51f", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-jkslr
W0911 15:36:38.294] I0911 15:36:37.608814   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"64bacfc4-137e-459b-9bb2-9c91ba44a51f", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-6fnbz
W0911 15:36:38.294] E0911 15:36:37.708338   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.294] E0911 15:36:37.807554   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:38.295] I0911 15:36:38.041879   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"d63db54e-9746-42ce-addf-b3653f324097", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
W0911 15:36:38.295] I0911 15:36:38.044690   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-7f6fc565b9", UID:"fc3f87b6-0f28-4756-8338-325127b21588", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-xcsq9
I0911 15:36:38.396] apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:38.396] apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
I0911 15:36:38.526] replicaset.apps "nginx-deployment-7f6fc565b9" deleted
I0911 15:36:38.609] apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0911 15:36:39.732] apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:39.874] deployment.apps/nginx configured
I0911 15:36:39.960] apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0911 15:36:40.041]     Image:	k8s.gcr.io/nginx:test-cmd
I0911 15:36:40.120] apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0911 15:36:40.202] deployment.apps/nginx rolled back
W0911 15:36:40.303] E0911 15:36:38.500879   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.304] E0911 15:36:38.607019   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.304] E0911 15:36:38.709379   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.304] I0911 15:36:38.746155   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"e3831560-00d0-4b07-8baa-7346fe894e41", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0911 15:36:40.305] I0911 15:36:38.748987   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"b43def66-7a57-408e-852b-1ef607a4eecb", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mrpjz
W0911 15:36:40.305] I0911 15:36:38.751136   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"b43def66-7a57-408e-852b-1ef607a4eecb", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-g7srm
W0911 15:36:40.306] I0911 15:36:38.751688   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6986c7bc94", UID:"b43def66-7a57-408e-852b-1ef607a4eecb", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-tmmgg
W0911 15:36:40.306] E0911 15:36:38.808658   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.306] I0911 15:36:39.384279   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx", UID:"c3076d50-d8cb-4e38-ab6c-da6c8f68251c", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0911 15:36:40.307] I0911 15:36:39.386421   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-f87d999f7", UID:"a7b42693-d91d-40b2-a570-303f79c61d4c", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-x6cb4
W0911 15:36:40.307] I0911 15:36:39.389473   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-f87d999f7", UID:"a7b42693-d91d-40b2-a570-303f79c61d4c", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-ct68q
W0911 15:36:40.307] I0911 15:36:39.392014   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-f87d999f7", UID:"a7b42693-d91d-40b2-a570-303f79c61d4c", APIVersion:"apps/v1", ResourceVersion:"2021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-c86vq
W0911 15:36:40.307] E0911 15:36:39.503376   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.308] E0911 15:36:39.608384   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.308] E0911 15:36:39.710591   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.308] E0911 15:36:39.809802   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.308] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0911 15:36:40.309] I0911 15:36:39.877127   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx", UID:"c3076d50-d8cb-4e38-ab6c-da6c8f68251c", APIVersion:"apps/v1", ResourceVersion:"2034", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
W0911 15:36:40.309] I0911 15:36:39.879762   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-78487f9fd7", UID:"64a4bd32-50fd-4e68-8abd-ea2d74eb704f", APIVersion:"apps/v1", ResourceVersion:"2035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-8dv6h
W0911 15:36:40.505] E0911 15:36:40.504432   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.610] E0911 15:36:40.609541   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.712] E0911 15:36:40.711894   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:40.811] E0911 15:36:40.810584   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:41.288] apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:41.446] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:41.531] deployment.apps/nginx rolled back
W0911 15:36:41.632] error: unable to find specified revision 1000000 in history
W0911 15:36:41.632] E0911 15:36:41.505658   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:41.633] E0911 15:36:41.610845   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:41.714] E0911 15:36:41.713426   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:41.812] E0911 15:36:41.811815   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:42.508] E0911 15:36:42.507403   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:42.613] E0911 15:36:42.612626   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:42.714] apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0911 15:36:42.761] deployment.apps/nginx paused
W0911 15:36:42.861] E0911 15:36:42.716152   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:42.862] E0911 15:36:42.813293   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:42.862] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0911 15:36:42.927] error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
I0911 15:36:43.028] deployment.apps/nginx resumed
I0911 15:36:43.098] deployment.apps/nginx rolled back
I0911 15:36:43.255]     deployment.kubernetes.io/revision-history: 1,3
W0911 15:36:43.429] error: desired revision (3) is different from the running revision (5)
W0911 15:36:43.508] E0911 15:36:43.508279   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:43.522] I0911 15:36:43.522029   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx", UID:"c3076d50-d8cb-4e38-ab6c-da6c8f68251c", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-78487f9fd7 to 0
W0911 15:36:43.529] I0911 15:36:43.528614   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx", UID:"c3076d50-d8cb-4e38-ab6c-da6c8f68251c", APIVersion:"apps/v1", ResourceVersion:"2066", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7ffc787445 to 1
W0911 15:36:43.531] I0911 15:36:43.530480   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-78487f9fd7", UID:"64a4bd32-50fd-4e68-8abd-ea2d74eb704f", APIVersion:"apps/v1", ResourceVersion:"2067", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-78487f9fd7-8dv6h
W0911 15:36:43.534] I0911 15:36:43.533752   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-7ffc787445", UID:"6d46e05e-c550-4bda-aef5-1b70a8e8015a", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7ffc787445-x9rtv
W0911 15:36:43.614] E0911 15:36:43.614000   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:43.715] deployment.apps/nginx restarted
W0911 15:36:43.816] E0911 15:36:43.717543   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:43.817] E0911 15:36:43.814660   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:44.510] E0911 15:36:44.509806   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:44.615] E0911 15:36:44.615028   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:44.716] Successful
I0911 15:36:44.716] message:apiVersion: apps/v1
I0911 15:36:44.716] kind: ReplicaSet
I0911 15:36:44.717] metadata:
I0911 15:36:44.717]   annotations:
I0911 15:36:44.717]     deployment.kubernetes.io/desired-replicas: "3"
... skipping 75 lines ...
I0911 15:36:46.712] apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:46.863] apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:46.941] (Bapps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0911 15:36:47.008] deployment.apps "nginx-deployment" deleted
I0911 15:36:47.091] apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:47.229] deployment.apps/nginx-deployment created
W0911 15:36:47.330] E0911 15:36:44.718674   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.330] I0911 15:36:44.805559   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx2", UID:"67345172-6791-4209-bf1c-11a8f7d84d3d", APIVersion:"apps/v1", ResourceVersion:"2085", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-57b7865cd9 to 3
W0911 15:36:47.331] I0911 15:36:44.808394   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx2-57b7865cd9", UID:"4a572e2e-567c-4cb9-ac8c-91d146061c07", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-6n2bs
W0911 15:36:47.331] I0911 15:36:44.810389   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx2-57b7865cd9", UID:"4a572e2e-567c-4cb9-ac8c-91d146061c07", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-rrdc9
W0911 15:36:47.331] I0911 15:36:44.811806   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx2-57b7865cd9", UID:"4a572e2e-567c-4cb9-ac8c-91d146061c07", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-cf97k
W0911 15:36:47.331] E0911 15:36:44.816650   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.332] I0911 15:36:45.194404   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"399a325e-c9f1-47b9-b596-8c4a58206544", APIVersion:"apps/v1", ResourceVersion:"2119", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W0911 15:36:47.332] I0911 15:36:45.196793   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e6398452-5e11-4e5e-b9cf-765a3b399e30", APIVersion:"apps/v1", ResourceVersion:"2120", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-w25w6
W0911 15:36:47.332] I0911 15:36:45.199161   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e6398452-5e11-4e5e-b9cf-765a3b399e30", APIVersion:"apps/v1", ResourceVersion:"2120", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-mnlw8
W0911 15:36:47.332] I0911 15:36:45.199451   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e6398452-5e11-4e5e-b9cf-765a3b399e30", APIVersion:"apps/v1", ResourceVersion:"2120", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-9przs
W0911 15:36:47.333] I0911 15:36:45.509842   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"399a325e-c9f1-47b9-b596-8c4a58206544", APIVersion:"apps/v1", ResourceVersion:"2134", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
W0911 15:36:47.333] E0911 15:36:45.511020   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.333] I0911 15:36:45.512970   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-59df9b5f5b", UID:"5cb140f8-9f04-4002-b22a-76c298968e50", APIVersion:"apps/v1", ResourceVersion:"2135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-725vt
W0911 15:36:47.334] E0911 15:36:45.616354   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.334] E0911 15:36:45.719873   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.334] error: unable to find container named "redis"
W0911 15:36:47.334] E0911 15:36:45.818180   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.334] E0911 15:36:46.512323   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.335] I0911 15:36:46.558379   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"399a325e-c9f1-47b9-b596-8c4a58206544", APIVersion:"apps/v1", ResourceVersion:"2152", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W0911 15:36:47.335] I0911 15:36:46.562886   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"399a325e-c9f1-47b9-b596-8c4a58206544", APIVersion:"apps/v1", ResourceVersion:"2155", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
W0911 15:36:47.335] I0911 15:36:46.563392   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e6398452-5e11-4e5e-b9cf-765a3b399e30", APIVersion:"apps/v1", ResourceVersion:"2156", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-w25w6
W0911 15:36:47.336] I0911 15:36:46.566271   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-7d758dbc54", UID:"645ecd87-cf32-4b23-98b4-c45f5f07e07d", APIVersion:"apps/v1", ResourceVersion:"2159", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-qcg2q
W0911 15:36:47.336] E0911 15:36:46.617759   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.336] E0911 15:36:46.721059   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.336] E0911 15:36:46.819698   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:47.337] I0911 15:36:47.232560   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2184", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
W0911 15:36:47.337] I0911 15:36:47.234810   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e5848814-b6b2-45da-b19d-6b64f00d2481", APIVersion:"apps/v1", ResourceVersion:"2185", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-bd9q6
W0911 15:36:47.337] I0911 15:36:47.237665   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e5848814-b6b2-45da-b19d-6b64f00d2481", APIVersion:"apps/v1", ResourceVersion:"2185", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-kqb2v
W0911 15:36:47.338] I0911 15:36:47.238252   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e5848814-b6b2-45da-b19d-6b64f00d2481", APIVersion:"apps/v1", ResourceVersion:"2185", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-m66wq
I0911 15:36:47.438] configmap/test-set-env-config created
I0911 15:36:47.520] secret/test-set-env-secret created
... skipping 3 lines ...
I0911 15:36:47.841] deployment.apps/nginx-deployment env updated
I0911 15:36:47.928] apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I0911 15:36:48.005] apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
I0911 15:36:48.088] deployment.apps/nginx-deployment env updated
I0911 15:36:48.173] apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
I0911 15:36:48.254] deployment.apps/nginx-deployment env updated
W0911 15:36:48.354] E0911 15:36:47.513685   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:48.355] E0911 15:36:47.619143   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:48.355] E0911 15:36:47.722276   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:48.356] E0911 15:36:47.820866   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:48.356] I0911 15:36:47.843841   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
W0911 15:36:48.357] I0911 15:36:47.846189   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6b9f7756b4", UID:"863e7b3f-6194-487a-a170-3636934fb7d2", APIVersion:"apps/v1", ResourceVersion:"2202", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-4mmdv
W0911 15:36:48.357] I0911 15:36:48.048389   52874 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568216185-3803
W0911 15:36:48.357] I0911 15:36:48.097042   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2211", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
W0911 15:36:48.358] I0911 15:36:48.103397   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-598d4d68b4", UID:"e5848814-b6b2-45da-b19d-6b64f00d2481", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-bd9q6
W0911 15:36:48.358] I0911 15:36:48.103826   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-754bf964c8 to 1
... skipping 31 lines ...
I0911 15:36:49.588] apps.sh:521: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:49.741] replicaset.apps/frontend-no-cascade created
I0911 15:36:49.823] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: php-redis:php-redis:
I0911 15:36:49.894] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: php-redis:php-redis:
W0911 15:36:49.995] I0911 15:36:48.494089   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6b9f7756b4 to 0
W0911 15:36:49.996] I0911 15:36:48.497178   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-5958f7687", UID:"cc182d0a-04b0-4c2e-9781-677d123781b4", APIVersion:"apps/v1", ResourceVersion:"2261", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5958f7687-8n6g5
W0911 15:36:49.996] E0911 15:36:48.514997   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:49.997] E0911 15:36:48.621323   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:49.997] I0911 15:36:48.644266   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment", UID:"bc061c58-0f2f-48a8-bdd6-9246eddd617c", APIVersion:"apps/v1", ResourceVersion:"2272", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-d74969475 to 1
W0911 15:36:49.998] I0911 15:36:48.648489   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-6b9f7756b4", UID:"863e7b3f-6194-487a-a170-3636934fb7d2", APIVersion:"apps/v1", ResourceVersion:"2270", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-4mmdv
W0911 15:36:49.998] E0911 15:36:48.723633   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:49.998] E0911 15:36:48.795489   52874 replica_set.go:450] Sync "namespace-1568216195-27592/nginx-deployment-598d4d68b4" failed with replicasets.apps "nginx-deployment-598d4d68b4" not found
W0911 15:36:49.998] E0911 15:36:48.821987   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:49.999] I0911 15:36:48.847118   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216195-27592", Name:"nginx-deployment-d74969475", UID:"a2ce3f0b-b9e8-4559-b8cc-7aae2a2dc3cd", APIVersion:"apps/v1", ResourceVersion:"2276", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-d74969475-nlqk7
W0911 15:36:49.999] E0911 15:36:48.996282   52874 replica_set.go:450] Sync "namespace-1568216195-27592/nginx-deployment-5958f7687" failed with replicasets.apps "nginx-deployment-5958f7687" not found
W0911 15:36:49.999] E0911 15:36:49.145742   52874 replica_set.go:450] Sync "namespace-1568216195-27592/nginx-deployment-6b9f7756b4" failed with replicasets.apps "nginx-deployment-6b9f7756b4" not found
W0911 15:36:49.999] E0911 15:36:49.245642   52874 replica_set.go:450] Sync "namespace-1568216195-27592/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
W0911 15:36:50.000] E0911 15:36:49.297315   52874 replica_set.go:450] Sync "namespace-1568216195-27592/nginx-deployment-d74969475" failed with replicasets.apps "nginx-deployment-d74969475" not found
W0911 15:36:50.000] I0911 15:36:49.349665   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"4cfe011d-80ed-4df9-be74-d5c32bbbb2a9", APIVersion:"apps/v1", ResourceVersion:"2308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4s7jc
W0911 15:36:50.000] I0911 15:36:49.447460   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"4cfe011d-80ed-4df9-be74-d5c32bbbb2a9", APIVersion:"apps/v1", ResourceVersion:"2308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9k5rn
W0911 15:36:50.000] I0911 15:36:49.496617   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"4cfe011d-80ed-4df9-be74-d5c32bbbb2a9", APIVersion:"apps/v1", ResourceVersion:"2308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxftr
W0911 15:36:50.001] E0911 15:36:49.518005   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.001] E0911 15:36:49.622485   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.001] E0911 15:36:49.695667   52874 replica_set.go:450] Sync "namespace-1568216208-2363/frontend" failed with replicasets.apps "frontend" not found
W0911 15:36:50.001] E0911 15:36:49.724897   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.001] I0911 15:36:49.747269   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend-no-cascade", UID:"6e27be64-2be9-4974-83d8-f4ce9b05c311", APIVersion:"apps/v1", ResourceVersion:"2322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-kd82h
W0911 15:36:50.002] I0911 15:36:49.797119   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend-no-cascade", UID:"6e27be64-2be9-4974-83d8-f4ce9b05c311", APIVersion:"apps/v1", ResourceVersion:"2322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-c9p87
W0911 15:36:50.002] E0911 15:36:49.823575   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.002] I0911 15:36:49.897560   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend-no-cascade", UID:"6e27be64-2be9-4974-83d8-f4ce9b05c311", APIVersion:"apps/v1", ResourceVersion:"2322", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-lpp5m
W0911 15:36:50.519] E0911 15:36:50.519170   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.624] E0911 15:36:50.623714   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.726] E0911 15:36:50.726133   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:50.825] E0911 15:36:50.824819   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:50.970] apps.sh:527: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0911 15:36:50.973] +++ [0911 15:36:50] Deleting rs
I0911 15:36:51.041] replicaset.apps "frontend-no-cascade" deleted
I0911 15:36:51.146] apps.sh:531: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0911 15:36:51.224] apps.sh:533: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0911 15:36:51.295] pod "frontend-no-cascade-c9p87" deleted
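The apps.sh:531/533 checks above capture the point of the frontend-no-cascade case: the ReplicaSet is gone while its three pods survive and are then deleted individually. A minimal sketch of that flow, assuming the --cascade=false flag of this kubectl generation (later renamed --cascade=orphan):

  # delete the ReplicaSet but orphan its pods
  kubectl delete rs frontend-no-cascade --cascade=false
  # the pods remain and must be removed separately
  kubectl delete pods -l tier=frontend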
... skipping 8 lines ...
I0911 15:36:51.800] Namespace:    namespace-1568216208-2363
I0911 15:36:51.800] Selector:     app=guestbook,tier=frontend
I0911 15:36:51.800] Labels:       app=guestbook
I0911 15:36:51.800]               tier=frontend
I0911 15:36:51.801] Annotations:  <none>
I0911 15:36:51.801] Replicas:     3 current / 3 desired
I0911 15:36:51.801] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:51.801] Pod Template:
I0911 15:36:51.801]   Labels:  app=guestbook
I0911 15:36:51.801]            tier=frontend
I0911 15:36:51.801]   Containers:
I0911 15:36:51.801]    php-redis:
I0911 15:36:51.802]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0911 15:36:51.892] Namespace:    namespace-1568216208-2363
I0911 15:36:51.892] Selector:     app=guestbook,tier=frontend
I0911 15:36:51.893] Labels:       app=guestbook
I0911 15:36:51.893]               tier=frontend
I0911 15:36:51.893] Annotations:  <none>
I0911 15:36:51.893] Replicas:     3 current / 3 desired
I0911 15:36:51.893] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:51.893] Pod Template:
I0911 15:36:51.893]   Labels:  app=guestbook
I0911 15:36:51.894]            tier=frontend
I0911 15:36:51.894]   Containers:
I0911 15:36:51.894]    php-redis:
I0911 15:36:51.894]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0911 15:36:51.982] Namespace:    namespace-1568216208-2363
I0911 15:36:51.982] Selector:     app=guestbook,tier=frontend
I0911 15:36:51.982] Labels:       app=guestbook
I0911 15:36:51.983]               tier=frontend
I0911 15:36:51.983] Annotations:  <none>
I0911 15:36:51.983] Replicas:     3 current / 3 desired
I0911 15:36:51.983] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:51.984] Pod Template:
I0911 15:36:51.984]   Labels:  app=guestbook
I0911 15:36:51.984]            tier=frontend
I0911 15:36:51.984]   Containers:
I0911 15:36:51.985]    php-redis:
I0911 15:36:51.985]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0911 15:36:52.078] Namespace:    namespace-1568216208-2363
I0911 15:36:52.078] Selector:     app=guestbook,tier=frontend
I0911 15:36:52.079] Labels:       app=guestbook
I0911 15:36:52.079]               tier=frontend
I0911 15:36:52.079] Annotations:  <none>
I0911 15:36:52.079] Replicas:     3 current / 3 desired
I0911 15:36:52.079] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:52.079] Pod Template:
I0911 15:36:52.079]   Labels:  app=guestbook
I0911 15:36:52.079]            tier=frontend
I0911 15:36:52.079]   Containers:
I0911 15:36:52.079]    php-redis:
I0911 15:36:52.080]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I0911 15:36:52.080]   Type    Reason            Age   From                   Message
I0911 15:36:52.081]   ----    ------            ----  ----                   -------
I0911 15:36:52.081]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-t9wnb
I0911 15:36:52.081]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-nxsjr
I0911 15:36:52.081]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-9cf8s
W0911 15:36:52.181] E0911 15:36:51.520285   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:52.182] I0911 15:36:51.596288   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"676f4719-6a50-46d9-b332-53fde338cefd", APIVersion:"apps/v1", ResourceVersion:"2346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t9wnb
W0911 15:36:52.182] I0911 15:36:51.598559   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"676f4719-6a50-46d9-b332-53fde338cefd", APIVersion:"apps/v1", ResourceVersion:"2346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nxsjr
W0911 15:36:52.182] I0911 15:36:51.598588   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"676f4719-6a50-46d9-b332-53fde338cefd", APIVersion:"apps/v1", ResourceVersion:"2346", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9cf8s
W0911 15:36:52.183] E0911 15:36:51.624988   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:52.183] E0911 15:36:51.727625   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:52.183] E0911 15:36:51.826021   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0911 15:36:52.283] Successful describe rs:
I0911 15:36:52.284] Name:         frontend
I0911 15:36:52.284] Namespace:    namespace-1568216208-2363
I0911 15:36:52.284] Selector:     app=guestbook,tier=frontend
I0911 15:36:52.284] Labels:       app=guestbook
I0911 15:36:52.284]               tier=frontend
I0911 15:36:52.284] Annotations:  <none>
I0911 15:36:52.284] Replicas:     3 current / 3 desired
I0911 15:36:52.285] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:52.285] Pod Template:
I0911 15:36:52.285]   Labels:  app=guestbook
I0911 15:36:52.285]            tier=frontend
I0911 15:36:52.285]   Containers:
I0911 15:36:52.285]    php-redis:
I0911 15:36:52.285]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0911 15:36:52.288] Namespace:    namespace-1568216208-2363
I0911 15:36:52.288] Selector:     app=guestbook,tier=frontend
I0911 15:36:52.288] Labels:       app=guestbook
I0911 15:36:52.288]               tier=frontend
I0911 15:36:52.288] Annotations:  <none>
I0911 15:36:52.288] Replicas:     3 current / 3 desired
I0911 15:36:52.288] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:52.288] Pod Template:
I0911 15:36:52.289]   Labels:  app=guestbook
I0911 15:36:52.289]            tier=frontend
I0911 15:36:52.289]   Containers:
I0911 15:36:52.289]    php-redis:
I0911 15:36:52.289]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0911 15:36:52.381] Namespace:    namespace-1568216208-2363
I0911 15:36:52.381] Selector:     app=guestbook,tier=frontend
I0911 15:36:52.381] Labels:       app=guestbook
I0911 15:36:52.381]               tier=frontend
I0911 15:36:52.381] Annotations:  <none>
I0911 15:36:52.381] Replicas:     3 current / 3 desired
I0911 15:36:52.382] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:52.382] Pod Template:
I0911 15:36:52.382]   Labels:  app=guestbook
I0911 15:36:52.382]            tier=frontend
I0911 15:36:52.383]   Containers:
I0911 15:36:52.383]    php-redis:
I0911 15:36:52.383]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0911 15:36:52.472] Namespace:    namespace-1568216208-2363
I0911 15:36:52.472] Selector:     app=guestbook,tier=frontend
I0911 15:36:52.472] Labels:       app=guestbook
I0911 15:36:52.472]               tier=frontend
I0911 15:36:52.472] Annotations:  <none>
I0911 15:36:52.472] Replicas:     3 current / 3 desired
I0911 15:36:52.472] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0911 15:36:52.473] Pod Template:
I0911 15:36:52.473]   Labels:  app=guestbook
I0911 15:36:52.473]            tier=frontend
I0911 15:36:52.473]   Containers:
I0911 15:36:52.473]    php-redis:
I0911 15:36:52.473]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 99 lines ...
I0911 15:36:52.597] Tolerations:           <none>
I0911 15:36:52.597] Events:                <none>
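The repeated Name/Namespace/Selector blocks above are successive exercises of kubectl's describe path against the same ReplicaSet. Assuming plain kubectl against this test namespace, the manual equivalent is roughly:

  kubectl describe rs frontend --namespace=namespace-1568216208-2363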
I0911 15:36:52.659] apps.sh:566: Successful get rs frontend {{.spec.replicas}}: 3
I0911 15:36:52.733] replicaset.apps/frontend scaled
I0911 15:36:52.811] apps.sh:570: Successful get rs frontend {{.spec.replicas}}: 2
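apps.sh:566/570 bracket a scale-down of the ReplicaSet from 3 to 2 replicas; a minimal sketch of that step and its verification (illustrative, not the harness's literal wrapper):

  kubectl scale rs frontend --replicas=2
  kubectl get rs frontend -o go-template='{{.spec.replicas}}'   # prints: 2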
I0911 15:36:52.942] deployment.apps/scale-1 created
W0911 15:36:53.043] E0911 15:36:52.521433   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:53.044] E0911 15:36:52.626171   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:53.044] E0911 15:36:52.728861   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:53.044] I0911 15:36:52.738221   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"frontend", UID:"676f4719-6a50-46d9-b332-53fde338cefd", APIVersion:"apps/v1", ResourceVersion:"2355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-t9wnb
W0911 15:36:53.044] E0911 15:36:52.827103   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:53.045] I0911 15:36:52.944810   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-1", UID:"1fae414f-4282-4bd7-8620-bb62e2fbfc22", APIVersion:"apps/v1", ResourceVersion:"2361", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
W0911 15:36:53.045] I0911 15:36:52.947816   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-1-5c5565bcd9", UID:"8d6f74e0-77c1-40c8-bfc0-8ff2b649fb9c", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-4j59b
W0911 15:36:53.085] I0911 15:36:53.084670   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-2", UID:"24b091fc-796d-4541-8a63-f74278e3440b", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
W0911 15:36:53.088] I0911 15:36:53.087309   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-2-5c5565bcd9", UID:"e10b0724-e419-4746-9b8c-0fdb3ed43761", APIVersion:"apps/v1", ResourceVersion:"2372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-gn9g4
I0911 15:36:53.188] deployment.apps/scale-2 created
I0911 15:36:53.233] deployment.apps/scale-3 created
... skipping 14 lines ...
I0911 15:36:54.168] replicaset.apps "frontend" deleted
I0911 15:36:54.243] deployment.apps "scale-1" deleted
I0911 15:36:54.246] deployment.apps "scale-2" deleted
I0911 15:36:54.249] deployment.apps "scale-3" deleted
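The ScalingReplicaSet events below (scale-1 and scale-2 taken to 2, then all three deployments to 3) are consistent with scaling several deployments in single commands; an assumed-equivalent pair of invocations would be:

  kubectl scale deployment scale-1 scale-2 --replicas=2
  kubectl scale deployment scale-1 scale-2 scale-3 --replicas=3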
W0911 15:36:54.350] I0911 15:36:53.235562   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-3", UID:"d4bb0106-74c5-4266-b38e-2c97bdb9aaa1", APIVersion:"apps/v1", ResourceVersion:"2381", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
W0911 15:36:54.351] I0911 15:36:53.238157   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-3-5c5565bcd9", UID:"b90a7e59-c06a-4b63-8d27-d2c9df2e3d8a", APIVersion:"apps/v1", ResourceVersion:"2382", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-gwd77
W0911 15:36:54.351] E0911 15:36:53.522818   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:54.352] I0911 15:36:53.542647   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-1", UID:"1fae414f-4282-4bd7-8620-bb62e2fbfc22", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 2
W0911 15:36:54.352] I0911 15:36:53.546142   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-1-5c5565bcd9", UID:"8d6f74e0-77c1-40c8-bfc0-8ff2b649fb9c", APIVersion:"apps/v1", ResourceVersion:"2393", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-9d27w
W0911 15:36:54.353] I0911 15:36:53.548860   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-2", UID:"24b091fc-796d-4541-8a63-f74278e3440b", APIVersion:"apps/v1", ResourceVersion:"2394", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 2
W0911 15:36:54.353] I0911 15:36:53.552615   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-2-5c5565bcd9", UID:"e10b0724-e419-4746-9b8c-0fdb3ed43761", APIVersion:"apps/v1", ResourceVersion:"2397", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-dbvdf
W0911 15:36:54.354] E0911 15:36:53.627286   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:54.354] E0911 15:36:53.730214   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:54.354] E0911 15:36:53.828342   52874 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0911 15:36:54.355] I0911 15:36:53.856842   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-1", UID:"1fae414f-4282-4bd7-8620-bb62e2fbfc22", APIVersion:"apps/v1", ResourceVersion:"2412", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 3
W0911 15:36:54.355] I0911 15:36:53.860847   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-1-5c5565bcd9", UID:"8d6f74e0-77c1-40c8-bfc0-8ff2b649fb9c", APIVersion:"apps/v1", ResourceVersion:"2413", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-wbhdf
W0911 15:36:54.356] I0911 15:36:53.864059   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-2", UID:"24b091fc-796d-4541-8a63-f74278e3440b", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 3
W0911 15:36:54.356] I0911 15:36:53.867649   52874 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568216208-2363", Name:"scale-2-5c5565bcd9", UID:"e10b0724-e419-4746-9b8c-0fdb3ed43761", APIVersion:"apps/v1", ResourceVersion:"2420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-drsg2
W0911 15:36:54.357] I0911 15:36:53.868311   52874 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568216208-2363", Name:"scale-3", UID:"d4bb0106-74c5-4266-b38e-2c97bdb9aaa1", APIVersion:"apps/v1", ResourceVersion:"2418", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 3
W0911 15:36:54.357] I0911 15:36:53.871569   52874 e