PR: hasheddan: Make CustomResourceDefinitionStatus fields +optional
Result: FAILURE
Tests: 1 failed / 2610 succeeded
Started: 2020-01-14 22:27
Elapsed: 28m7s
Revision: ee1f0b8a381daf0075bf01ece29857b721aebb34
Refs: 87213
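The change under test adds the +optional comment marker to the CustomResourceDefinitionStatus fields; Kubernetes code generators read this marker when producing OpenAPI schemas and validation, so marked fields may be omitted by clients. As a rough, hypothetical sketch of what such a change looks like (fields abridged and the referenced type stubbed so the snippet compiles standalone; the real definitions live in apiextensions-apiserver):

package v1

// CustomResourceDefinitionCondition is stubbed here only so this
// illustration compiles on its own.
type CustomResourceDefinitionCondition struct{}

// CustomResourceDefinitionStatus with its fields marked +optional.
type CustomResourceDefinitionStatus struct {
	// conditions indicate state for particular aspects of a CustomResourceDefinition.
	// +optional
	Conditions []CustomResourceDefinitionCondition `json:"conditions,omitempty"`

	// storedVersions lists all versions of CustomResources that were ever persisted.
	// +optional
	StoredVersions []string `json:"storedVersions,omitempty"`
}

By Kubernetes API conventions, +optional fields conventionally carry an omitempty JSON tag so an empty status serializes cleanly.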

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPostBindPlugin 4.13s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPostBindPlugin$
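TestPostBindPlugin exercises the scheduler framework's PostBind extension point, which runs after a pod has been successfully bound to a node. For orientation, a minimal sketch of a PostBind plugin against the framework's v1alpha1 interface of this era (illustrative only; the integration test registers its own in-tree test plugin, not this one):

package plugins

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// loggingPostBind is a hypothetical plugin that records pod placements.
type loggingPostBind struct{}

var _ framework.PostBindPlugin = &loggingPostBind{}

// Name identifies the plugin in scheduler configuration.
func (p *loggingPostBind) Name() string { return "LoggingPostBind" }

// PostBind runs after the bind succeeds and cannot change the outcome.
func (p *loggingPostBind) PostBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
	fmt.Printf("pod %s/%s bound to node %s\n", pod.Namespace, pod.Name, nodeName)
}

PostBind returns nothing because it is purely informational: by the time it runs, the bind has already happened, so a plugin at this point can only observe or react to the placement. The log below is the apiserver startup from the failing run.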
=== RUN   TestPostBindPlugin
W0114 22:50:45.648946  109816 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0114 22:50:45.648971  109816 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0114 22:50:45.648986  109816 master.go:308] Node port range unspecified. Defaulting to 30000-32767.
I0114 22:50:45.648997  109816 master.go:264] Using reconciler: 
I0114 22:50:45.650733  109816 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.651016  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.651112  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.652403  109816 store.go:1350] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0114 22:50:45.652456  109816 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.652754  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.652778  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.652869  109816 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0114 22:50:45.654217  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.654639  109816 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 22:50:45.654694  109816 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.654715  109816 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 22:50:45.655118  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.655148  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.655703  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.656011  109816 store.go:1350] Monitoring limitranges count at <storage-prefix>//limitranges
I0114 22:50:45.656065  109816 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0114 22:50:45.656186  109816 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.656374  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.656399  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.656911  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.657594  109816 store.go:1350] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0114 22:50:45.657777  109816 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0114 22:50:45.657784  109816 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.657992  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.658026  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.658522  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.658917  109816 store.go:1350] Monitoring secrets count at <storage-prefix>//secrets
I0114 22:50:45.659060  109816 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0114 22:50:45.659882  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.659933  109816 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.660147  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.660180  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.661000  109816 store.go:1350] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0114 22:50:45.661181  109816 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0114 22:50:45.661253  109816 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.662062  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.662966  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.662995  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.663770  109816 store.go:1350] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0114 22:50:45.663860  109816 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0114 22:50:45.663948  109816 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.664318  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.664418  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.665687  109816 store.go:1350] Monitoring configmaps count at <storage-prefix>//configmaps
I0114 22:50:45.665783  109816 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0114 22:50:45.666086  109816 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.666359  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.666479  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.671616  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.671883  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.687930  109816 store.go:1350] Monitoring namespaces count at <storage-prefix>//namespaces
I0114 22:50:45.688452  109816 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.688930  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.688061  109816 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0114 22:50:45.689381  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.701917  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.729652  109816 store.go:1350] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0114 22:50:45.729866  109816 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.730022  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.730044  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.730151  109816 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0114 22:50:45.732118  109816 store.go:1350] Monitoring nodes count at <storage-prefix>//minions
I0114 22:50:45.732306  109816 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I0114 22:50:45.732672  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.732883  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.732922  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.734182  109816 store.go:1350] Monitoring pods count at <storage-prefix>//pods
I0114 22:50:45.734246  109816 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I0114 22:50:45.734583  109816 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.734226  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.734711  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.734729  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.735460  109816 store.go:1350] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0114 22:50:45.735539  109816 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0114 22:50:45.735683  109816 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.735826  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.735844  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.735915  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.736202  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.736496  109816 store.go:1350] Monitoring services count at <storage-prefix>//services/specs
I0114 22:50:45.736584  109816 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0114 22:50:45.736662  109816 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.737093  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.737129  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.737286  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.737750  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.738126  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.738153  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.738773  109816 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.738902  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.738925  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.739624  109816 store.go:1350] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0114 22:50:45.739647  109816 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0114 22:50:45.739707  109816 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0114 22:50:45.740149  109816 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.740353  109816 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.741033  109816 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.741670  109816 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.742397  109816 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.743080  109816 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.743501  109816 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.743641  109816 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.743844  109816 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.744284  109816 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.744844  109816 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.745125  109816 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.745827  109816 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.746107  109816 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.746698  109816 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.747025  109816 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.747703  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.747989  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.748205  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.748409  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.748655  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.748886  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.749152  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.749911  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.750909  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.751914  109816 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.752742  109816 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.755095  109816 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.755440  109816 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.755826  109816 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.762304  109816 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.762564  109816 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.763153  109816 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.763775  109816 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.764296  109816 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.765401  109816 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.765660  109816 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.765775  109816 master.go:488] Skipping disabled API group "auditregistration.k8s.io".
I0114 22:50:45.765901  109816 master.go:499] Enabling API group "authentication.k8s.io".
I0114 22:50:45.765986  109816 master.go:499] Enabling API group "authorization.k8s.io".
I0114 22:50:45.766208  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.766463  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.766569  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.768617  109816 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:50:45.768684  109816 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:50:45.768795  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.768935  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.768955  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.786791  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.786815  109816 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:50:45.786970  109816 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:50:45.787014  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.787150  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.787171  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.788000  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.795660  109816 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 22:50:45.795690  109816 master.go:499] Enabling API group "autoscaling".
I0114 22:50:45.795764  109816 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 22:50:45.795895  109816 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.796088  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.796112  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.796884  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.797386  109816 store.go:1350] Monitoring jobs.batch count at <storage-prefix>//jobs
I0114 22:50:45.797419  109816 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0114 22:50:45.797571  109816 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.797718  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.797742  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.798488  109816 store.go:1350] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0114 22:50:45.798518  109816 master.go:499] Enabling API group "batch".
I0114 22:50:45.798691  109816 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.798826  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.798851  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.798918  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.798966  109816 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0114 22:50:45.801586  109816 watch_cache.go:409] Replace watchCache (rev: 28419) 
I0114 22:50:45.801667  109816 store.go:1350] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0114 22:50:45.801694  109816 master.go:499] Enabling API group "certificates.k8s.io".
I0114 22:50:45.801871  109816 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.801895  109816 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0114 22:50:45.802025  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.802045  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.802640  109816 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 22:50:45.802804  109816 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 22:50:45.802820  109816 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.802955  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.802974  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.807550  109816 watch_cache.go:409] Replace watchCache (rev: 28421) 
I0114 22:50:45.807816  109816 watch_cache.go:409] Replace watchCache (rev: 28421) 
I0114 22:50:45.808680  109816 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 22:50:45.808700  109816 master.go:499] Enabling API group "coordination.k8s.io".
I0114 22:50:45.808730  109816 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 22:50:45.808872  109816 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.809044  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.809066  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.809915  109816 watch_cache.go:409] Replace watchCache (rev: 28421) 
I0114 22:50:45.834393  109816 store.go:1350] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0114 22:50:45.834427  109816 master.go:499] Enabling API group "discovery.k8s.io".
I0114 22:50:45.834631  109816 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.834786  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.834813  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.834909  109816 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0114 22:50:45.836348  109816 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 22:50:45.836369  109816 master.go:499] Enabling API group "extensions".
I0114 22:50:45.836533  109816 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.836669  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.836688  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.836784  109816 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 22:50:45.841621  109816 store.go:1350] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0114 22:50:45.841792  109816 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.841952  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.841972  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.842048  109816 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0114 22:50:45.842594  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.842743  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.843178  109816 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 22:50:45.843196  109816 master.go:499] Enabling API group "networking.k8s.io".
I0114 22:50:45.843341  109816 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.843452  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.843468  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.843547  109816 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 22:50:45.844538  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.845394  109816 store.go:1350] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0114 22:50:45.845411  109816 master.go:499] Enabling API group "node.k8s.io".
I0114 22:50:45.845554  109816 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.845690  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.845705  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.845795  109816 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0114 22:50:45.847115  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.847205  109816 store.go:1350] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0114 22:50:45.847346  109816 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.847403  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.847458  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.847472  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.847574  109816 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0114 22:50:45.848697  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.848875  109816 store.go:1350] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0114 22:50:45.848892  109816 master.go:499] Enabling API group "policy".
I0114 22:50:45.848937  109816 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.849066  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.849076  109816 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0114 22:50:45.849085  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.849847  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.850542  109816 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 22:50:45.850578  109816 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 22:50:45.850773  109816 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.850992  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.851010  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.851461  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.851616  109816 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 22:50:45.851663  109816 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.851684  109816 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 22:50:45.851794  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.851812  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.852565  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.852874  109816 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 22:50:45.853125  109816 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 22:50:45.853121  109816 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.853360  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.853385  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.854307  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.854905  109816 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 22:50:45.855004  109816 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 22:50:45.855304  109816 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.856051  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.856354  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.856377  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.857806  109816 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 22:50:45.857958  109816 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.858068  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.858083  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.858087  109816 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 22:50:45.858793  109816 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 22:50:45.858836  109816 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.858948  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.858959  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.858965  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.859035  109816 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 22:50:45.859544  109816 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 22:50:45.859681  109816 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.859813  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.859838  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.859912  109816 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 22:50:45.860393  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.861139  109816 watch_cache.go:409] Replace watchCache (rev: 28424) 
I0114 22:50:45.861435  109816 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 22:50:45.861473  109816 master.go:499] Enabling API group "rbac.authorization.k8s.io".
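(Annotation: every reflector.go:188 "Listing and watching *rbac.Role ..." line above is a reflector the watch cache starts per resource: one LIST to seed its store, then a WATCH to keep it current. A minimal client-go sketch of the same list+watch pattern run from outside a cluster; the kubeconfig path and the choice of ClusterRoles are assumptions for the example.)

```go
package main

import (
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a reachable cluster via the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ListerWatcher for cluster-scoped ClusterRoles (namespace is empty).
	lw := cache.NewListWatchFromClient(
		cs.RbacV1().RESTClient(), "clusterroles", "", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &rbacv1.ClusterRole{}, store, 10*time.Minute)

	stop := make(chan struct{})
	// Blocks: LIST once, then WATCH, replaying events into the store —
	// the same loop behind the "Listing and watching" lines in this log.
	r.Run(stop)
}
```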
I0114 22:50:45.863232  109816 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.863387  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.863415  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.863500  109816 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 22:50:45.866711  109816 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 22:50:45.866820  109816 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 22:50:45.866881  109816 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.867007  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.867024  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.869761  109816 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 22:50:45.869796  109816 master.go:499] Enabling API group "scheduling.k8s.io".
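(Annotation: each storage_factory.go:285 line follows one template — the resource is encoded to etcd in its preferred external version ("storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1") but decoded into the internal hub type for serving ("reading as scheduling.k8s.io/__internal"). A hedged sketch of that hub conversion; it builds only inside the k8s.io/kubernetes module, and the field values are placeholders.)

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	"k8s.io/kubernetes/pkg/api/legacyscheme"
	scheduling "k8s.io/kubernetes/pkg/apis/scheduling"
	_ "k8s.io/kubernetes/pkg/apis/scheduling/install" // registers the group into legacyscheme
)

func main() {
	// The versioned object is what lands in etcd; the internal type is what
	// server code paths handle. Scheme.Convert bridges the two.
	ext := &schedulingv1.PriorityClass{Value: 1000}
	ext.Name = "demo"
	internal := &scheduling.PriorityClass{}
	if err := legacyscheme.Scheme.Convert(ext, internal, nil); err != nil {
		panic(err)
	}
	fmt.Println(internal.Name, internal.Value)
}
```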
I0114 22:50:45.869814  109816 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 22:50:45.869932  109816 master.go:488] Skipping disabled API group "settings.k8s.io".
I0114 22:50:45.870228  109816 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.870460  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.870495  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.871042  109816 watch_cache.go:409] Replace watchCache (rev: 28425) 
I0114 22:50:45.871229  109816 watch_cache.go:409] Replace watchCache (rev: 28425) 
I0114 22:50:45.874920  109816 watch_cache.go:409] Replace watchCache (rev: 28425) 
I0114 22:50:45.874940  109816 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 22:50:45.875105  109816 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.875155  109816 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 22:50:45.875245  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.875263  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.875864  109816 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 22:50:45.876035  109816 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.876171  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.876191  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.876286  109816 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 22:50:45.876356  109816 watch_cache.go:409] Replace watchCache (rev: 28425) 
I0114 22:50:45.876992  109816 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 22:50:45.877343  109816 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.877414  109816 watch_cache.go:409] Replace watchCache (rev: 28425) 
I0114 22:50:45.877445  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.877459  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.877550  109816 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 22:50:45.878171  109816 store.go:1350] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0114 22:50:45.878323  109816 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0114 22:50:45.878317  109816 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.878533  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.878554  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.879303  109816 watch_cache.go:409] Replace watchCache (rev: 28426) 
I0114 22:50:45.879683  109816 watch_cache.go:409] Replace watchCache (rev: 28426) 
I0114 22:50:45.879705  109816 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 22:50:45.879686  109816 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 22:50:45.879876  109816 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.880043  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.880073  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.880593  109816 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 22:50:45.880684  109816 watch_cache.go:409] Replace watchCache (rev: 28426) 
I0114 22:50:45.880762  109816 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.880890  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.880910  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.880920  109816 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 22:50:45.882111  109816 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 22:50:45.882130  109816 master.go:499] Enabling API group "storage.k8s.io".
I0114 22:50:45.882145  109816 master.go:488] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
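(Annotation: the master.go "Enabling API group ..." / "Skipping disabled API group ..." lines are where groups get wired into, or left out of, the served discovery document. A sketch of verifying the outcome from the outside with the client-go discovery client; assumes a reachable kubeconfig.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Groups "enabled" above appear here; skipped ones (e.g.
	// flowcontrol.apiserver.k8s.io in this run) do not.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name, "preferred:", g.PreferredVersion.GroupVersion)
	}
}
```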
I0114 22:50:45.882155  109816 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 22:50:45.882304  109816 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.882423  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.882441  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.883067  109816 store.go:1350] Monitoring deployments.apps count at <storage-prefix>//deployments
I0114 22:50:45.883191  109816 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0114 22:50:45.883231  109816 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.883350  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.883372  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.883413  109816 watch_cache.go:409] Replace watchCache (rev: 28426) 
I0114 22:50:45.883413  109816 watch_cache.go:409] Replace watchCache (rev: 28426) 
I0114 22:50:45.883987  109816 store.go:1350] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0114 22:50:45.884110  109816 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0114 22:50:45.884499  109816 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.884826  109816 watch_cache.go:409] Replace watchCache (rev: 28427) 
I0114 22:50:45.884953  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.884974  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.886481  109816 store.go:1350] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0114 22:50:45.886643  109816 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.886774  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.886777  109816 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0114 22:50:45.886801  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.887440  109816 store.go:1350] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0114 22:50:45.887584  109816 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.887697  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.887712  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.887844  109816 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0114 22:50:45.888018  109816 watch_cache.go:409] Replace watchCache (rev: 28427) 
I0114 22:50:45.888875  109816 store.go:1350] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0114 22:50:45.888894  109816 master.go:499] Enabling API group "apps".
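(Annotation: each watch_cache.go:409 "Replace watchCache (rev: N)" line records the cacher swapping in a fresh LIST result at etcd revision N; subsequent watches resume from that resourceVersion. A sketch of starting a watch at an explicit resourceVersion with a recent client-go, where Watch takes a context; the "28427" value is copied from this log purely as an illustration and would be stale against any real cluster.)

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Resume watching deployments from a known revision, the way the
	// watch cache does after replacing its contents at "rev: N".
	w, err := cs.AppsV1().Deployments(metav1.NamespaceAll).Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: "28427"}) // hypothetical value
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
}
```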
I0114 22:50:45.888931  109816 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0114 22:50:45.889398  109816 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.889540  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.889557  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.889602  109816 watch_cache.go:409] Replace watchCache (rev: 28428) 
I0114 22:50:45.890286  109816 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 22:50:45.890441  109816 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.890625  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.890647  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.890726  109816 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 22:50:45.890759  109816 watch_cache.go:409] Replace watchCache (rev: 28428) 
I0114 22:50:45.891349  109816 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 22:50:45.891493  109816 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.891626  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.891647  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.891672  109816 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 22:50:45.891687  109816 watch_cache.go:409] Replace watchCache (rev: 28428) 
I0114 22:50:45.892057  109816 watch_cache.go:409] Replace watchCache (rev: 28427) 
I0114 22:50:45.892507  109816 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 22:50:45.892668  109816 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.892791  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.892811  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.892901  109816 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 22:50:45.893062  109816 watch_cache.go:409] Replace watchCache (rev: 28428) 
I0114 22:50:45.894003  109816 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 22:50:45.894025  109816 master.go:499] Enabling API group "admissionregistration.k8s.io".
I0114 22:50:45.894061  109816 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 22:50:45.894097  109816 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.894491  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:45.894522  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:45.895051  109816 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 22:50:45.895178  109816 master.go:499] Enabling API group "events.k8s.io".
I0114 22:50:45.895243  109816 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 22:50:45.895500  109816 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.895781  109816 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896089  109816 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896243  109816 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896312  109816 watch_cache.go:409] Replace watchCache (rev: 28429) 
I0114 22:50:45.896369  109816 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896487  109816 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896617  109816 watch_cache.go:409] Replace watchCache (rev: 28429) 
I0114 22:50:45.896692  109816 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896802  109816 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.896932  109816 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.897244  109816 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.898067  109816 watch_cache.go:409] Replace watchCache (rev: 28429) 
I0114 22:50:45.898156  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.898497  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.899350  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.899638  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.900453  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.900725  109816 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.901572  109816 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.901833  109816 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.902800  109816 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.903063  109816 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.903110  109816 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
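(Annotation: alpha versions like batch/v2alpha1 are compiled in but register no resources unless explicitly enabled, which is what this warning reflects; on a kube-apiserver they are switched on with --runtime-config, e.g. --runtime-config=batch/v2alpha1=true. A sketch probing one group/version through discovery; that a disabled version surfaces as an error from this call is the stated assumption.)

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// For a version the server does not serve, this returns an error,
	// mirroring the "Skipping API ... has no resources" warning above.
	rl, err := dc.ServerResourcesForGroupVersion("batch/v2alpha1")
	if err != nil {
		fmt.Println("not served:", err)
		return
	}
	for _, r := range rl.APIResources {
		fmt.Println(r.Name)
	}
}
```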
I0114 22:50:45.903730  109816 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.903861  109816 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.904077  109816 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.904839  109816 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.905766  109816 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.906653  109816 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.906734  109816 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0114 22:50:45.907460  109816 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.907758  109816 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.908671  109816 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.909528  109816 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.909790  109816 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.910424  109816 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.910495  109816 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0114 22:50:45.911399  109816 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.911756  109816 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.912310  109816 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.912948  109816 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.913657  109816 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.914354  109816 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.915035  109816 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.915628  109816 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.916177  109816 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.917065  109816 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.917812  109816 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.917903  109816 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0114 22:50:45.918662  109816 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.919282  109816 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.919373  109816 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0114 22:50:45.920090  109816 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.920676  109816 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.921230  109816 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.921501  109816 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.922093  109816 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.922535  109816 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.923067  109816 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.924357  109816 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.924470  109816 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
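(Aside: each storage_factory.go:285 line above records the etcd3 backend configuration used for one resource, and the "Skipping API ... because it has no resources" warnings are the generic apiserver dropping group versions with no enabled storage. A minimal Go sketch of that config, reconstructed only from the fields the log prints -- the prefix UUID and etcd address are the ones shown above, and the field layout tracks the apiserver vintage in this log, so current releases may differ:

package main

import (
	"fmt"
	"time"

	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func main() {
	// Mirrors the storagebackend.Config printed at storage_factory.go:285.
	// The log's 300000000000ns and 60000000000ns are 5m and 1m.
	cfg := storagebackend.Config{
		Type:   "", // empty selects the default backend (etcd3)
		Prefix: "40069fa6-23f9-4eed-b278-5712abf5fb60",
		Transport: storagebackend.TransportConfig{
			ServerList: []string{"http://127.0.0.1:2379"},
		},
		Paging:                true,
		CompactionInterval:    5 * time.Minute,
		CountMetricPollPeriod: time.Minute,
	}
	fmt.Printf("%+v\n", cfg)
}

End aside.)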
I0114 22:50:45.925363  109816 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.926177  109816 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.926629  109816 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.927386  109816 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.927650  109816 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.927922  109816 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.928656  109816 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.928946  109816 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.929269  109816 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.929987  109816 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.930259  109816 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.930524  109816 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.930593  109816 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0114 22:50:45.930615  109816 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0114 22:50:45.931277  109816 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.931876  109816 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.932621  109816 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.933282  109816 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 22:50:45.934124  109816 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"40069fa6-23f9-4eed-b278-5712abf5fb60", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 22:50:45.937712  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:50:45.937820  109816 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 22:50:45.937833  109816 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 22:50:45.938035  109816 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 22:50:45.938051  109816 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 22:50:45.938988  109816 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (448.046µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55558]
I0114 22:50:45.939761  109816 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=28419 labels= fields= timeout=9m37s
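(Aside: the reflector lines above show the standard list-then-watch pattern -- an initial LIST at resourceVersion=0 with limit=500, then a WATCH resumed from the returned revision (rv=28419) with a randomized timeout. A hedged client-go sketch of the same pattern for ConfigMaps in kube-system; the kubeconfig path is hypothetical, since the test above builds its client in-process:

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List-then-watch ConfigMaps in kube-system; 12h matches the
	// reflector resync period "(12h0m0s)" in the log above.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "configmaps", "kube-system", fields.Everything())
	store, controller := cache.NewInformer(
		lw, &corev1.ConfigMap{}, 12*time.Hour, cache.ResourceEventHandlerFuncs{})
	_ = store

	stop := make(chan struct{})
	go controller.Run(stop) // internally: LIST, then WATCH from the returned RV
	<-stop
}

End aside.)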
I0114 22:50:45.940025  109816 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.500916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:45.940772  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:45.940790  109816 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0114 22:50:45.940801  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:45.940810  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:45.940819  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:45.940841  109816 httplog.go:90] GET /healthz: (160.371µs) 0 [Go-http-client/1.1 127.0.0.1:55556]
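(Aside: the healthz block above is the output format of the apiserver's aggregated health endpoint: one "[+]name ok" or "[-]name failed: reason withheld" line per registered check, a trailing "healthz check failed", and a non-200 status when any check is red. A minimal sketch of that aggregation shape; the check names and wiring here are illustrative, not the apiserver's actual registration:

package main

import (
	"fmt"
	"net/http"
)

// healthCheck is one named check; the real apiserver registers ping, log,
// etcd, and one check per post-start hook, as the log block above shows.
type healthCheck struct {
	name  string
	check func() error
}

func healthzHandler(checks []healthCheck) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body string
		failed := false
		for _, c := range checks {
			if err := c.check(); err != nil {
				failed = true
				// The unauthenticated path withholds the failure reason.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []healthCheck{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return fmt.Errorf("etcd client connection not yet established") }},
	}
	http.HandleFunc("/healthz", healthzHandler(checks))
	http.ListenAndServe(":8080", nil)
}

End aside.)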
I0114 22:50:45.942627  109816 httplog.go:90] GET /api/v1/services: (1.111631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:45.952777  109816 httplog.go:90] GET /api/v1/services: (3.273275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:45.957487  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:45.957526  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:45.957539  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:45.957548  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:45.957574  109816 httplog.go:90] GET /healthz: (210.19µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55562]
I0114 22:50:45.961878  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.519043ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:45.961897  109816 httplog.go:90] GET /api/v1/services: (2.300668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55562]
I0114 22:50:45.963619  109816 httplog.go:90] GET /api/v1/services: (2.708715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:45.964286  109816 httplog.go:90] POST /api/v1/namespaces: (2.000606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55562]
I0114 22:50:45.967229  109816 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.515667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:45.970683  109816 httplog.go:90] POST /api/v1/namespaces: (2.49331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:45.975069  109816 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (4.058072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:45.976753  109816 httplog.go:90] POST /api/v1/namespaces: (1.345987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
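(Aside: the six httplog lines above are the bootstrap controller ensuring the system namespaces -- for each of kube-system, kube-public, and kube-node-lease, a GET that 404s followed by a POST that 201s. A sketch of that get-or-create step with modern, context-aware client-go signatures (an assumption; the vintage in this log predates the ctx parameter), using in-cluster config purely for illustration:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func ensureNamespace(ctx context.Context, client kubernetes.Interface, name string) error {
	// GET first; a NotFound here corresponds to the 404 httplog lines above.
	_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	// Not found: create it, matching the "POST /api/v1/namespaces ... 201" lines.
	_, err = client.CoreV1().Namespaces().Create(ctx,
		&corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}},
		metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // assumption: the test wires its client differently
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for _, ns := range []string{"kube-system", "kube-public", "kube-node-lease"} {
		if err := ensureNamespace(context.Background(), client, ns); err != nil {
			panic(err)
		}
		fmt.Println("ensured namespace", ns)
	}
}

End aside.)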
I0114 22:50:46.038011  109816 shared_informer.go:236] caches populated
I0114 22:50:46.038042  109816 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
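(Aside: "Caches are synced" above is the shared-informer machinery reporting that the initial LIST has landed in the local cache, which the controller waited on at "Waiting for caches to sync" earlier. A minimal sketch of that wait, assuming a SharedInformerFactory built for illustration:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption for illustration
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 12*time.Hour)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Blocks until the initial list is in the local cache; only then would
	// a controller log its "Caches are synced" style message.
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced")
}

End aside.)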
I0114 22:50:46.041642  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.041682  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.041693  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.041701  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.041734  109816 httplog.go:90] GET /healthz: (213.651µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.058376  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.058416  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.058438  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.058448  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.058487  109816 httplog.go:90] GET /healthz: (255.314µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.141626  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.141662  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.141674  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.141682  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.141717  109816 httplog.go:90] GET /healthz: (212.492µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.158361  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.158440  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.158452  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.158461  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.158504  109816 httplog.go:90] GET /healthz: (282.35µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.241634  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.241672  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.241683  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.241692  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.241742  109816 httplog.go:90] GET /healthz: (249.72µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.258356  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.258400  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.258412  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.258423  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.258463  109816 httplog.go:90] GET /healthz: (238.899µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.341626  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.341663  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.341678  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.341687  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.341729  109816 httplog.go:90] GET /healthz: (240.272µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.359786  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.359821  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.359832  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.359841  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.359868  109816 httplog.go:90] GET /healthz: (214.362µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.441694  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.441729  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.441740  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.441749  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.441791  109816 httplog.go:90] GET /healthz: (228.704µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.458364  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.458407  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.458418  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.458427  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.458451  109816 httplog.go:90] GET /healthz: (239.486µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.541628  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.541666  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.541679  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.541688  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.541724  109816 httplog.go:90] GET /healthz: (221.341µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.558260  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.558305  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.558318  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.558327  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.558353  109816 httplog.go:90] GET /healthz: (214.92µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.641642  109816 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 22:50:46.641680  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.641691  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.641711  109816 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.641750  109816 httplog.go:90] GET /healthz: (247.705µs) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.652108  109816 client.go:361] parsed scheme: "endpoint"
I0114 22:50:46.652190  109816 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:46.659263  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.659337  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.659349  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.659383  109816 httplog.go:90] GET /healthz: (1.24774ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
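(Aside: note the transition in the block above -- once the etcd client connection is established ("parsed scheme: endpoint" just before), the etcd check flips to "[+]etcd ok" and only the post-start hooks remain red. The repeated GET /healthz lines are readiness polling; a hedged sketch of that loop using apimachinery's wait helpers, with an illustrative address:

package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll /healthz until every check reports ok (HTTP 200).
	err := wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		resp, err := http.Get("http://127.0.0.1:8080/healthz") // illustrative address
		if err != nil {
			return false, nil // server not up yet; keep polling
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}

End aside.)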
I0114 22:50:46.742605  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.742631  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.742640  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.742674  109816 httplog.go:90] GET /healthz: (1.165314ms) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.759194  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.759222  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.759249  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.759299  109816 httplog.go:90] GET /healthz: (1.104297ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.843896  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.843929  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.843939  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.843975  109816 httplog.go:90] GET /healthz: (2.046577ms) 0 [Go-http-client/1.1 127.0.0.1:55564]
I0114 22:50:46.859204  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.859241  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.859251  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.859291  109816 httplog.go:90] GET /healthz: (1.014726ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.939070  109816 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (1.321875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.939125  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.940622  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.008599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.941973  109816 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (2.372496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.942162  109816 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0114 22:50:46.943122  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.943148  109816 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 22:50:46.943158  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.943188  109816 httplog.go:90] GET /healthz: (1.466658ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:46.943293  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.99684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.943310  109816 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (956.906µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55564]
I0114 22:50:46.946858  109816 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.166078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:46.947089  109816 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0114 22:50:46.947119  109816 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0114 22:50:46.949340  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (5.756077ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.950884  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.237828ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.952133  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (775.703µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.953462  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.059169ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.954572  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (815.225µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.955855  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (837.263µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.958043  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.770641ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.959172  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:46.959198  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:46.959227  109816 httplog.go:90] GET /healthz: (769.444µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:46.960107  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.41162ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.960256  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0114 22:50:46.961622  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.168204ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.963370  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.340873ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.963576  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
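(Aside: from here to the end of the section the RBAC post-start hook reconciles every bootstrap cluster role with the same rhythm -- GET 404, POST 201, "created clusterrole". A hedged sketch of that ensure step for a single role; the role name and rules below are hypothetical stand-ins, not the actual bootstrap policy, and the real reconciler also diffs rules on existing roles:

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func ensureClusterRole(ctx context.Context, client kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := client.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists; a full reconciler would also compare rules
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	if _, err := client.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{}); err != nil {
		return err
	}
	fmt.Printf("created clusterrole.rbac.authorization.k8s.io/%s\n", role.Name)
	return nil
}

func main() {
	cfg, err := rest.InClusterConfig() // assumption for illustration
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "example:read-only"}, // hypothetical name
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""},
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	if err := ensureClusterRole(context.Background(), client, role); err != nil {
		panic(err)
	}
}

End aside.)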
I0114 22:50:46.966058  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (2.28067ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.968517  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.965842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.968950  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0114 22:50:46.970342  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (901.182µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.972073  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.386633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.972415  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0114 22:50:46.973588  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.025213ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.975862  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.827135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.976052  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0114 22:50:46.977148  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (806.824µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.979281  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.631302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.979515  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0114 22:50:46.984036  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (4.34051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.989681  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.227335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.989983  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0114 22:50:46.991125  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (913.337µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.993350  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.995298  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0114 22:50:46.996543  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.017579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:46.999660  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.254898ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.000023  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0114 22:50:47.001447  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.171106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.003976  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.005188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.004327  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0114 22:50:47.005661  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.101526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.007566  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.463514ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.007748  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0114 22:50:47.008893  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (956.375µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.011288  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.576086ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.011626  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0114 22:50:47.013349  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.469088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.015392  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.605778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.015609  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0114 22:50:47.016794  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (962.947µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.019465  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.006095ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.019797  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0114 22:50:47.020966  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (917.156µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.024483  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.809385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.025303  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0114 22:50:47.026413  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (903.954µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.028607  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.779141ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.028862  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0114 22:50:47.030305  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.071029ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.033439  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.647764ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.033879  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0114 22:50:47.035155  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (991.467µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.037895  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.298819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.039017  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 22:50:47.040474  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.144808ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.042077  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.042108  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.042136  109816 httplog.go:90] GET /healthz: (779.477µs) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:47.043366  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.393768ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.043684  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0114 22:50:47.044827  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (971.016µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.046737  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.307351ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.047042  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0114 22:50:47.048195  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (876.952µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.050071  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458432ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.050249  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0114 22:50:47.051287  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (876.555µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.053634  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.782471ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.054065  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0114 22:50:47.060322  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.060369  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.060401  109816 httplog.go:90] GET /healthz: (2.252142ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.060776  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (6.035104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.063523  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.017078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.063768  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0114 22:50:47.064950  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (970.479µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.068886  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.549181ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.069288  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0114 22:50:47.071551  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (2.077528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.073950  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.998294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.074436  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0114 22:50:47.075674  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (940.22µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.078569  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.382784ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.078823  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 22:50:47.080028  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (935.86µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.082485  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.030663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.082669  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 22:50:47.084755  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.893821ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.087576  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.061442ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.087899  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 22:50:47.097837  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (9.61917ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.101291  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.8917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.101756  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 22:50:47.103208  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (878.037µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.106574  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.914572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.106914  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 22:50:47.109695  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.414168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.111524  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.472486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.111724  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 22:50:47.112738  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (812.046µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.114568  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.436244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.114850  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 22:50:47.116757  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.682344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.119054  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.756299ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.119298  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 22:50:47.121166  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.395366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.123235  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547021ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.123484  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 22:50:47.125304  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.60038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.127342  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.650849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.127546  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 22:50:47.131186  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (835.531µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.133201  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.571135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.133417  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0114 22:50:47.135439  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.800033ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.137684  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.820867ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.137951  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 22:50:47.139016  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (812.418µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.143966  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.143991  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.144040  109816 httplog.go:90] GET /healthz: (952.319µs) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:47.146059  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.645638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.146340  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0114 22:50:47.147440  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (776.897µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.149694  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.793708ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.150017  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 22:50:47.151380  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.052885ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.153553  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.582962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.153746  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 22:50:47.158693  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (4.612518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.158760  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.158779  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.158813  109816 httplog.go:90] GET /healthz: (683.274µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.160738  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.500742ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.160916  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 22:50:47.162092  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (813.458µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.164325  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.868981ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.164513  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 22:50:47.165719  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.041686ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.168151  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.101127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.168449  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 22:50:47.169917  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.263229ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.174128  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.680831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.174350  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0114 22:50:47.177646  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (3.085645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.179777  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.733012ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.180270  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 22:50:47.181803  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.24022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.184334  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.800621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.184815  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0114 22:50:47.185809  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (827.461µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.188017  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.831804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.188771  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 22:50:47.190153  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (864.442µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.192828  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.184555ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.193338  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 22:50:47.194447  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (876.895µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.196743  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.884507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.197292  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 22:50:47.201247  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (3.727578ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.204163  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.171224ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.204408  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 22:50:47.205817  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.197108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.207828  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.67553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.208002  109816 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0114 22:50:47.209278  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.091406ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.211595  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.785338ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.211746  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0114 22:50:47.222389  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.557089ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.240400  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.483876ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.240829  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0114 22:50:47.242568  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.242597  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.242635  109816 httplog.go:90] GET /healthz: (1.185506ms) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.259299  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.259349  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.259372  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.360161ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.259383  109816 httplog.go:90] GET /healthz: (1.331537ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.281573  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.317226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.281958  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0114 22:50:47.299190  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.34081ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.320071  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.178337ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.320344  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0114 22:50:47.339313  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.286015ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.349177  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.349210  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.349248  109816 httplog.go:90] GET /healthz: (7.794812ms) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.359748  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.359780  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.359816  109816 httplog.go:90] GET /healthz: (1.707795ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.360009  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.055646ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.360370  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0114 22:50:47.385454  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (7.566341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.401581  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.576374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.401850  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 22:50:47.420176  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.192692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.440198  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310581ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.440443  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0114 22:50:47.442435  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.442459  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.442501  109816 httplog.go:90] GET /healthz: (1.034617ms) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.461787  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.461833  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.461880  109816 httplog.go:90] GET /healthz: (3.773163ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.462374  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (4.503104ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.480036  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115719ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.480297  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0114 22:50:47.499351  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.255945ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.528756  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.497803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.528995  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0114 22:50:47.539021  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.169225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.542607  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.542638  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.542678  109816 httplog.go:90] GET /healthz: (1.262313ms) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.560206  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.345313ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.560457  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0114 22:50:47.560497  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.560514  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.560554  109816 httplog.go:90] GET /healthz: (1.972232ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.579636  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.80062ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.599762  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.796019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.600003  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 22:50:47.618889  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.061784ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.640635  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.49034ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.640919  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 22:50:47.643477  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.643497  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.643532  109816 httplog.go:90] GET /healthz: (2.050408ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:47.659696  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.659729  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.659761  109816 httplog.go:90] GET /healthz: (1.177576ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.660177  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.28333ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.679822  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.842357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.680092  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 22:50:47.698928  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.048082ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.722404  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.067654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.722678  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 22:50:47.739067  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.199148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.742312  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.742348  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.742385  109816 httplog.go:90] GET /healthz: (912.233µs) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:47.758910  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.758939  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.758986  109816 httplog.go:90] GET /healthz: (879.821µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.759626  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.807363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.759961  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 22:50:47.780730  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.656004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.807225  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (9.339567ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.807480  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 22:50:47.819380  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.49255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.843732  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.843764  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.843804  109816 httplog.go:90] GET /healthz: (1.50516ms) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.844249  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.958634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.844483  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 22:50:47.859063  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.175503ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.859219  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.859249  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.859278  109816 httplog.go:90] GET /healthz: (801.048µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.879744  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.86492ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.879999  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 22:50:47.899264  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.301666ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.923374  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.359108ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.923628  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 22:50:47.939607  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.695546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.942368  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.942399  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.942431  109816 httplog.go:90] GET /healthz: (908.207µs) 0 [Go-http-client/1.1 127.0.0.1:55556]
I0114 22:50:47.959202  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:47.959231  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:47.959279  109816 httplog.go:90] GET /healthz: (1.16313ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:47.959842  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.96057ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.960114  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 22:50:47.979138  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.205914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.999649  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.728296ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:47.999895  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0114 22:50:48.019039  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.177667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.041514  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.876424ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.041843  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 22:50:48.045981  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.046033  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.046078  109816 httplog.go:90] GET /healthz: (4.276627ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:48.059054  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.059100  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.199347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.059102  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.059174  109816 httplog.go:90] GET /healthz: (1.055393ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.080012  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.072203ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.080265  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0114 22:50:48.224548  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.224572  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (126.534981ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.224589  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.224630  109816 httplog.go:90] GET /healthz: (81.147523ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:48.224789  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.224804  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.224833  109816 httplog.go:90] GET /healthz: (66.350859ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.226835  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.549905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55556]
I0114 22:50:48.227078  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 22:50:48.228249  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (959.883µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.230382  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.645153ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.230616  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 22:50:48.231704  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (896.013µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.234259  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.197743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.234440  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 22:50:48.235850  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.099625ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.241329  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.820685ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.241505  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 22:50:48.243181  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.243208  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.243252  109816 httplog.go:90] GET /healthz: (1.832858ms) 0 [Go-http-client/1.1 127.0.0.1:55992]
I0114 22:50:48.259106  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.259135  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.259172  109816 httplog.go:90] GET /healthz: (971.006µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.259186  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.276ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.279926  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.028174ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.280297  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 22:50:48.299275  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.324914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.320510  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.478061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.320739  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0114 22:50:48.338975  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.109464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.342334  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.342359  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.342389  109816 httplog.go:90] GET /healthz: (944.908µs) 0 [Go-http-client/1.1 127.0.0.1:55992]
I0114 22:50:48.359107  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.359144  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.359189  109816 httplog.go:90] GET /healthz: (1.093925ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.359866  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.98357ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.360170  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 22:50:48.379151  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.264556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.399810  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.930437ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.400067  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0114 22:50:48.419373  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.394854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.440051  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.173988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.440281  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 22:50:48.442253  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.442282  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.442318  109816 httplog.go:90] GET /healthz: (917.348µs) 0 [Go-http-client/1.1 127.0.0.1:55992]
I0114 22:50:48.459048  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.169173ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.459940  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.459970  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.460030  109816 httplog.go:90] GET /healthz: (1.043242ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.479972  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.051657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.480259  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 22:50:48.499397  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.294551ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.520030  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.115331ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.520487  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 22:50:48.539307  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.423277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.542583  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.542612  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.542646  109816 httplog.go:90] GET /healthz: (1.243625ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:48.560532  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.223756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.560753  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 22:50:48.562085  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.562109  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.562140  109816 httplog.go:90] GET /healthz: (1.75576ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.579574  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.345997ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.600171  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.251508ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.600449  109816 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0114 22:50:48.627656  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (9.659632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.629591  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.525205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.777228  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (139.405982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.777536  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0114 22:50:48.788324  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.788360  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.788404  109816 httplog.go:90] GET /healthz: (130.050567ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.788741  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (10.890993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.793623  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.793646  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.793680  109816 httplog.go:90] GET /healthz: (152.219072ms) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:48.793841  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.341429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55992]
I0114 22:50:48.796133  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.83814ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.796511  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 22:50:48.798118  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (997.649µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.799457  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (958.847µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.801627  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.864836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.801840  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 22:50:48.803716  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.695364ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.806500  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.427871ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.810627  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.653676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.811001  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 22:50:48.812513  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.302511ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.814123  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.224811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.820759  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (6.184071ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.821364  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 22:50:48.822579  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (847.226µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.824241  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.349419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.842521  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.939047ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.842624  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.842645  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.842672  109816 httplog.go:90] GET /healthz: (1.215514ms) 0 [Go-http-client/1.1 127.0.0.1:56058]
I0114 22:50:48.842737  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 22:50:48.859199  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.359604ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.860029  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.860053  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.860085  109816 httplog.go:90] GET /healthz: (1.759233ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.861161  109816 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.193371ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.880091  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.210244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.880558  109816 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 22:50:48.898989  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.136792ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.900513  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.082214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.919931  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.948994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.920186  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0114 22:50:48.939123  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.257151ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.940686  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.120634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.942561  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.942588  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.942617  109816 httplog.go:90] GET /healthz: (1.039997ms) 0 [Go-http-client/1.1 127.0.0.1:56058]
I0114 22:50:48.959030  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:48.959058  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:48.959103  109816 httplog.go:90] GET /healthz: (867.928µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:48.959876  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.964006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.960103  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 22:50:48.979475  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.679407ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.981298  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.369571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:48.999842  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.944235ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.000146  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 22:50:49.019033  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.097722ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.020816  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.335207ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.041255  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.349805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.041481  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 22:50:49.042238  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:49.042268  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:49.042303  109816 httplog.go:90] GET /healthz: (867.747µs) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:49.059027  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.129865ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.060535  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:49.060559  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:49.060591  109816 httplog.go:90] GET /healthz: (2.050494ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.060646  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.219812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.080941  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.534541ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.081206  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 22:50:49.099161  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.238ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.100822  109816 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.203774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.119996  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.057827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.120240  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 22:50:49.139177  109816 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.267251ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.140945  109816 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.224799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.142369  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:49.142397  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:49.142427  109816 httplog.go:90] GET /healthz: (877.546µs) 0 [Go-http-client/1.1 127.0.0.1:55694]
I0114 22:50:49.160090  109816 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 22:50:49.160138  109816 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 22:50:49.160170  109816 httplog.go:90] GET /healthz: (1.168486ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.160838  109816 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.370448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.161130  109816 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 22:50:49.243982  109816 httplog.go:90] GET /healthz: (1.054404ms) 200 [Go-http-client/1.1 127.0.0.1:56058]
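The healthz flapping above ends here: once the rbac/bootstrap-roles post-start hook finishes, GET /healthz finally returns 200 and the test harness can proceed. A minimal sketch of the polling loop an integration test might use (stdlib only; the URL and interval are illustrative, not taken from the test):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls /healthz until the apiserver reports 200 OK.
// While any post-start hook is unfinished, the endpoint returns a
// failure body listing each check as [+] ok or [-] failed, exactly as
// in the log above.
func waitForHealthz(ctx context.Context, baseURL string) error {
	ticker := time.NewTicker(100 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := http.Get(baseURL + "/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForHealthz(ctx, "http://127.0.0.1:8080"); err != nil {
		fmt.Println(err)
	}
}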
W0114 22:50:49.244654  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.244689  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.244716  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:50:49.244776  109816 factory.go:174] Creating scheduler from algorithm provider 'DefaultProvider'
W0114 22:50:49.244847  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.245045  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.245199  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.245249  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.245332  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:50:49.245617  109816 reflector.go:153] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.245637  109816 reflector.go:153] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.245658  109816 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.245785  109816 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.245806  109816 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246050  109816 reflector.go:153] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246074  109816 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246122  109816 reflector.go:153] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246142  109816 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246439  109816 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246466  109816 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246507  109816 reflector.go:153] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246522  109816 reflector.go:188] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246529  109816 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (451.668µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.245643  109816 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246867  109816 reflector.go:153] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246893  109816 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0114 22:50:49.246968  109816 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (330.362µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.247560  109816 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=28419 labels= fields= timeout=7m2s
I0114 22:50:49.247603  109816 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (295.468µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.247646  109816 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (338.046µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56124]
I0114 22:50:49.247680  109816 get.go:251] Starting watch for /api/v1/services, rv=28419 labels= fields= timeout=6m30s
I0114 22:50:49.247648  109816 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (299.553µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56126]
I0114 22:50:49.247930  109816 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (230.593µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56128]
I0114 22:50:49.248044  109816 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: (206.739µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56130]
I0114 22:50:49.248141  109816 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (412.527µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56132]
I0114 22:50:49.248159  109816 get.go:251] Starting watch for /api/v1/pods, rv=28419 labels= fields= timeout=9m35s
I0114 22:50:49.248361  109816 get.go:251] Starting watch for /api/v1/nodes, rv=28419 labels= fields= timeout=7m57s
I0114 22:50:49.248413  109816 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=28419 labels= fields= timeout=5m49s
I0114 22:50:49.248460  109816 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=28424 labels= fields= timeout=7m41s
I0114 22:50:49.248614  109816 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=28426 labels= fields= timeout=7m46s
I0114 22:50:49.248759  109816 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=28426 labels= fields= timeout=9m31s
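The burst of "Starting reflector" and "Starting watch" lines above is the scheduler's client-go informer machinery coming up: one shared informer per resource type, each performing a LIST followed by a long-poll WATCH. A rough sketch of the pattern, assuming a ready-made clientset; the one-second resync matches the "(1s)" in the log:

package sched

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// startInformers builds a shared informer factory and starts a reflector
// per requested resource. Each reflector issues the LIST
// (...?limit=500&resourceVersion=0) and then the WATCH requests seen in
// the httplog lines above.
func startInformers(client kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(client, time.Second)

	// Touch the informers the scheduler cares about so the factory knows
	// to start them (pods and nodes shown; PVs, PVCs, Services, etc.
	// work the same way).
	_ = factory.Core().V1().Pods().Informer()
	_ = factory.Core().V1().Nodes().Informer()

	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh) // corresponds to the "caches populated" lines later in the log
}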
I0114 22:50:49.259494  109816 httplog.go:90] GET /healthz: (1.036299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.261363  109816 httplog.go:90] GET /api/v1/namespaces/default: (1.256225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.263354  109816 httplog.go:90] POST /api/v1/namespaces: (1.663187ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.264595  109816 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (886.876µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.270726  109816 httplog.go:90] POST /api/v1/namespaces/default/services: (5.74184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.271924  109816 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (870.06µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.274536  109816 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.229647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.345563  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345604  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345611  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345616  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345621  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345627  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345632  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345638  109816 shared_informer.go:236] caches populated
I0114 22:50:49.345947  109816 shared_informer.go:236] caches populated
I0114 22:50:49.348414  109816 httplog.go:90] POST /api/v1/nodes: (2.306802ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.350179  109816 node_tree.go:86] Added node "test-node-0" in group "" to NodeTree
I0114 22:50:49.486291  109816 node_tree.go:86] Added node "test-node-1" in group "" to NodeTree
I0114 22:50:49.489674  109816 httplog.go:90] POST /api/v1/nodes: (140.741487ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.514150  109816 httplog.go:90] POST /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods: (23.934505ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.514223  109816 scheduling_queue.go:839] About to try and schedule pod postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
I0114 22:50:49.514241  109816 scheduler.go:562] Attempting to schedule pod: postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
W0114 22:50:49.514372  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.514404  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 22:50:49.514415  109816 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:50:49.514544  109816 scheduler_binder.go:278] AssumePodVolumes for pod "postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod", node "test-node-0"
I0114 22:50:49.514566  109816 scheduler_binder.go:288] AssumePodVolumes for pod "postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0114 22:50:49.514644  109816 factory.go:488] Attempting to bind test-pod to test-node-0
I0114 22:50:49.716789  109816 httplog.go:90] POST /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod/binding: (201.82565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.716886  109816 httplog.go:90] GET /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (101.697836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.717509  109816 scheduler.go:704] pod postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod is bound successfully on node "test-node-0", 2 nodes evaluated, 2 nodes were found feasible.
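The POST .../pods/test-pod/binding above is the actual scheduling decision being written: binding is a subresource of the pod, not a field update. With a typed clientset the call looks roughly like this (Bind takes only the Binding object in client-go of this vintage; newer releases add a context and options):

package sched

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPod POSTs a Binding to /api/v1/namespaces/{ns}/pods/{pod}/binding,
// the same request visible in the httplog line above.
func bindPod(client kubernetes.Interface, namespace, podName, nodeName string) error {
	binding := &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: podName},
		Target:     v1.ObjectReference{Kind: "Node", Name: nodeName},
	}
	return client.CoreV1().Pods(namespace).Bind(binding)
}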
I0114 22:50:49.720931  109816 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/events: (2.978215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.726258  109816 httplog.go:90] DELETE /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (8.627922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.734343  109816 httplog.go:90] GET /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (2.705278ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.737960  109816 httplog.go:90] POST /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods: (2.626239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.738284  109816 scheduling_queue.go:839] About to try and schedule pod postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
I0114 22:50:49.738305  109816 scheduler.go:562] Attempting to schedule pod: postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
I0114 22:50:49.738556  109816 scheduler_binder.go:278] AssumePodVolumes for pod "postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod", node "test-node-1"
I0114 22:50:49.738581  109816 scheduler_binder.go:288] AssumePodVolumes for pod "postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod", node "test-node-1": all PVCs bound and nothing to do
E0114 22:50:49.738639  109816 framework.go:614] error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod
E0114 22:50:49.738661  109816 factory.go:438] Error scheduling postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod: error while running "prebind-plugin" prebind plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0114 22:50:49.738696  109816 scheduler.go:741] Updating pod condition for postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod to (PodScheduled==False, Reason=SchedulerError)
I0114 22:50:49.741534  109816 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/events: (1.818791ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56208]
I0114 22:50:49.742191  109816 httplog.go:90] GET /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (2.632139ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.742539  109816 httplog.go:90] PUT /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod/status: (3.557768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56146]
I0114 22:50:49.750557  109816 httplog.go:90] GET /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (2.013977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.754055  109816 scheduling_queue.go:839] About to try and schedule pod postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
I0114 22:50:49.754097  109816 scheduler.go:722] Skip schedule deleting pod: postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/test-pod
I0114 22:50:49.757132  109816 httplog.go:90] DELETE /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (5.887031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.758023  109816 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/events: (3.336468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56208]
I0114 22:50:49.763443  109816 httplog.go:90] GET /api/v1/namespaces/postbind-plugin730f9ffe-aee3-44a0-855a-04161872db53/pods/test-pod: (4.567838ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.763982  109816 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=28419&timeout=6m30s&timeoutSeconds=390&watch=true: (516.45581ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56134]
I0114 22:50:49.763997  109816 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=28426&timeout=9m31s&timeoutSeconds=571&watch=true: (515.356287ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56138]
I0114 22:50:49.764002  109816 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=28419&timeout=7m57s&timeoutSeconds=477&watch=true: (515.797228ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56136]
I0114 22:50:49.764014  109816 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=28419&timeout=7m2s&timeoutSeconds=422&watch=true: (516.607886ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56058]
I0114 22:50:49.764146  109816 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=28419&timeout=5m49s&timeoutSeconds=349&watch=true: (515.999444ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55694]
I0114 22:50:49.764158  109816 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=28424&timeout=7m41s&timeoutSeconds=461&watch=true: (515.80015ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56130]
I0114 22:50:49.764152  109816 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=28419&timeout=9m35s&timeoutSeconds=575&watch=true: (516.138447ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56126]
I0114 22:50:49.764171  109816 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=28426&timeout=7m46s&timeoutSeconds=466&watch=true: (515.670515ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56128]
I0114 22:50:49.772690  109816 httplog.go:90] DELETE /api/v1/nodes: (8.56018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.772911  109816 controller.go:180] Shutting down kubernetes service endpoint reconciler
I0114 22:50:49.774578  109816 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.22367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.776572  109816 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.555079ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:56188]
I0114 22:50:49.776838  109816 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0114 22:50:49.776997  109816 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=28419&timeout=9m37s&timeoutSeconds=577&watch=true: (3.837426411s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:55558]
--- FAIL: TestPostBindPlugin (4.13s)
    framework_test.go:1081: test #0: Expected the postbind plugin to be called, but it was called 0 times.
    framework_test.go:1074: test #1: Didn't expect the postbind plugin to be called, yet it was called 1 time.

				from junit_20200114-224403.xml
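To summarize the failure: in test #0 the postbind plugin never ran, and in test #1, where the prebind plugin injects a failure (the framework.go:614 error above) so postbind must not run, it ran once. A minimal counting plugin pair in the style of the scheduler framework API of this era; the names and the failure toggle are illustrative, not the test's actual code:

package sched

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// countingPlugin counts PreBind/PostBind invocations and can be told to
// fail PreBind. A failed PreBind aborts binding, so PostBind must not
// fire for that attempt -- which is what test #1 asserts.
type countingPlugin struct {
	failPreBind       bool
	numPreBindCalled  int
	numPostBindCalled int
}

func (p *countingPlugin) Name() string { return "counting-plugin" }

func (p *countingPlugin) PreBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) *framework.Status {
	p.numPreBindCalled++
	if p.failPreBind {
		return framework.NewStatus(framework.Error, fmt.Sprintf("injecting failure for pod %v", pod.Name))
	}
	return nil
}

func (p *countingPlugin) PostBind(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) {
	p.numPostBindCalled++
}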

Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0114 22:32:25] Call tree:
!!! [0114 22:32:25]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0114 22:32:25]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0114 22:32:25]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0114 22:32:25]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0114 22:32:25]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0114 22:32:25] Running kubeadm tests
+++ [0114 22:32:32] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0114 22:33:26] Running tests without code coverage
{"Time":"2020-01-14T22:35:21.459843497Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t66.398s\n"}
✓  cmd/kubeadm/test/cmd (1m6.398s)
... skipping 302 lines ...
+++ [0114 22:37:26] Building kube-controller-manager
+++ [0114 22:37:33] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0114 22:38:10] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0114 22:38:12.024324   54929 serving.go:313] Generated self-signed cert in-memory
W0114 22:38:13.456812   54929 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 22:38:13.456863   54929 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0114 22:38:13.456876   54929 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0114 22:38:13.456896   54929 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 22:38:13.456925   54929 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0114 22:38:13.456958   54929 controllermanager.go:161] Version: v1.18.0-alpha.1.684+532eb28eb74abc
I0114 22:38:13.458447   54929 secure_serving.go:178] Serving securely on [::]:10257
I0114 22:38:13.458786   54929 tlsconfig.go:241] Starting DynamicServingCertificateController
I0114 22:38:13.458950   54929 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0114 22:38:13.459017   54929 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 20 lines ...
I0114 22:38:13.737208   54929 pv_controller_base.go:294] Starting persistent volume controller
I0114 22:38:13.737810   54929 shared_informer.go:206] Waiting for caches to sync for persistent volume
I0114 22:38:13.738055   54929 controllermanager.go:533] Started "serviceaccount"
I0114 22:38:13.738076   54929 core.go:241] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0114 22:38:13.738085   54929 controllermanager.go:525] Skipping "route"
W0114 22:38:13.738488   54929 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
E0114 22:38:13.738517   54929 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0114 22:38:13.738527   54929 controllermanager.go:525] Skipping "service"
I0114 22:38:13.738958   54929 controllermanager.go:533] Started "persistentvolume-expander"
W0114 22:38:13.739572   54929 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:38:13.739619   54929 controllermanager.go:533] Started "horizontalpodautoscaling"
W0114 22:38:13.739631   54929 controllermanager.go:525] Skipping "nodeipam"
W0114 22:38:13.739917   54929 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
... skipping 37 lines ...
I0114 22:38:13.744920   54929 controllermanager.go:533] Started "statefulset"
I0114 22:38:13.745209   54929 stateful_set.go:145] Starting stateful set controller
I0114 22:38:13.745222   54929 shared_informer.go:206] Waiting for caches to sync for stateful set
I0114 22:38:13.745239   54929 controllermanager.go:533] Started "cronjob"
I0114 22:38:13.745350   54929 cronjob_controller.go:97] Starting CronJob Manager
I0114 22:38:13.745468   54929 node_lifecycle_controller.go:77] Sending events to api server
E0114 22:38:13.745496   54929 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0114 22:38:13.745506   54929 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W0114 22:38:13.745516   54929 controllermanager.go:525] Skipping "root-ca-cert-publisher"
W0114 22:38:13.745882   54929 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:38:13.745904   54929 controllermanager.go:533] Started "endpoint"
I0114 22:38:13.745922   54929 endpoints_controller.go:181] Starting endpoint controller
I0114 22:38:13.745935   54929 shared_informer.go:206] Waiting for caches to sync for endpoint
... skipping 82 lines ...
I0114 22:38:14.510183   54929 cleaner.go:81] Starting CSR cleaner controller
I0114 22:38:14.510310   54929 pvc_protection_controller.go:100] Starting PVC protection controller
I0114 22:38:14.510330   54929 shared_informer.go:206] Waiting for caches to sync for PVC protection
I0114 22:38:14.537442   54929 controllermanager.go:533] Started "namespace"
I0114 22:38:14.539113   54929 namespace_controller.go:200] Starting namespace controller
I0114 22:38:14.539136   54929 shared_informer.go:206] Waiting for caches to sync for namespace
W0114 22:38:14.586842   54929 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0114 22:38:14.604692   54929 shared_informer.go:213] Caches are synced for taint 
I0114 22:38:14.604799   54929 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: 
I0114 22:38:14.604877   54929 taint_manager.go:186] Starting NoExecuteTaintManager
I0114 22:38:14.605168   54929 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"080962cc-592f-4198-bc6c-89bd80776304", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
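Event lines like the RegisteredNode one above come from client-go's record package: a broadcaster fans each recorded event out to the process log and to an API sink. A sketch of the usual wiring (the component name is illustrative):

package events

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/record"
	"k8s.io/klog"
)

// newRecorder wires an EventRecorder whose events are both logged and
// persisted through the API.
func newRecorder(client kubernetes.Interface) record.EventRecorder {
	broadcaster := record.NewBroadcaster()
	// Mirror every event to the process log -- this is what produces the
	// event.go lines above.
	broadcaster.StartLogging(klog.Infof)
	// Also write each event to the events API.
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{
		Interface: client.CoreV1().Events(""),
	})
	return broadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "node-controller"})
}

// Usage: recorder.Eventf(nodeRef, v1.EventTypeNormal, "RegisteredNode",
// "Node %s event: Registered Node %s in Controller", name, name)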
I0114 22:38:14.605310   54929 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0114 22:38:14.610739   54929 shared_informer.go:213] Caches are synced for PVC protection 
... skipping 6 lines ...
I0114 22:38:14.642751   54929 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
I0114 22:38:14.643309   54929 shared_informer.go:213] Caches are synced for ReplicaSet 
I0114 22:38:14.643904   54929 shared_informer.go:213] Caches are synced for daemon sets 
I0114 22:38:14.643910   54929 shared_informer.go:213] Caches are synced for PV protection 
I0114 22:38:14.644348   54929 shared_informer.go:213] Caches are synced for GC 
I0114 22:38:14.647079   54929 shared_informer.go:213] Caches are synced for endpoint 
E0114 22:38:14.653902   54929 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0114 22:38:14.654652   54929 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0114 22:38:14.658888   54929 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
E0114 22:38:14.676669   54929 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
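The "Operation cannot be fulfilled ... the object has been modified" errors above are routine optimistic-concurrency conflicts: the aggregation controller's update carried a stale resourceVersion because another writer got there first. The standard client-go remedy is to re-read and retry on 409s, sketched below with an illustrative label change (the signatures are the context-free ones of this era):

package rbacdemo

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchClusterRole updates a clusterrole, re-reading and retrying
// whenever the server answers 409 Conflict, so that the write always
// carries the latest resourceVersion.
func touchClusterRole(client kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		role, err := client.RbacV1().ClusterRoles().Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if role.Labels == nil {
			role.Labels = map[string]string{}
		}
		role.Labels["touched"] = "true"
		_, err = client.RbacV1().ClusterRoles().Update(role)
		return err
	})
}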
I0114 22:38:14.738018   54929 shared_informer.go:213] Caches are synced for persistent volume 
I0114 22:38:14.740773   54929 shared_informer.go:213] Caches are synced for expand 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   49s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests
... skipping 85 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0114 22:38:19] Creating namespace namespace-1579041499-19372
namespace/namespace-1579041499-19372 created
Context "test" modified.
+++ [0114 22:38:20] Testing RESTMapper
+++ [0114 22:38:20] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 601 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 188 lines ...
pod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0114 22:39:16] "kubectl patch with resourceVersion 547" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0114 22:39:17.488337   54929 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
(Bnode "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0114 22:39:32] Creating namespace namespace-1579041572-14708
namespace/namespace-1579041572-14708 created
Context "test" modified.
+++ [0114 22:39:33] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0114 22:39:33] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0114 22:39:37.691267   51462 client.go:361] parsed scheme: "endpoint"
I0114 22:39:37.691625   51462 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:39:37.696075   51462 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 102 lines ...
Context "test" modified.
+++ [0114 22:39:41] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 30 lines ...
I0114 22:39:46.625487   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-8484dd655", UID:"0e4c5187-1a61-401a-88eb-a7f8901956ed", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-jmtx5
I0114 22:39:46.628993   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-8484dd655", UID:"0e4c5187-1a61-401a-88eb-a7f8901956ed", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-vkgb2
I0114 22:39:46.629458   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-8484dd655", UID:"0e4c5187-1a61-401a-88eb-a7f8901956ed", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-v2fnj
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0114 22:39:46.816444   54929 horizontal.go:353] Horizontal Pod Autoscaler frontend has been deleted in namespace-1579041568-261
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1579041582-727\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1579041582-727"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0114 22:39:56.445161   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041582-727", Name:"nginx", UID:"8d89e2e5-8bae-4070-bd7e-a0e6ba33cda2", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0114 22:39:56.480690   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-668b6c7744", UID:"fa79d195-aaae-46b4-a42b-dd36717f0ea6", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-mdf47
I0114 22:39:56.483494   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-668b6c7744", UID:"fa79d195-aaae-46b4-a42b-dd36717f0ea6", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-n2wqm
I0114 22:39:56.486619   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041582-727", Name:"nginx-668b6c7744", UID:"fa79d195-aaae-46b4-a42b-dd36717f0ea6", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-sq7tw
Successful
... skipping 141 lines ...
+++ [0114 22:40:05] Creating namespace namespace-1579041605-16625
namespace/namespace-1579041605-16625 created
Context "test" modified.
+++ [0114 22:40:05] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1579041605-16625 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1579041605-16625 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0114 22:40:08.150424   65121 loader.go:375] Config loaded from file:  /tmp/tmp.iBCCS5CxAm/.kube/config
I0114 22:40:08.152477   65121 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0114 22:40:08.194839   65121 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0114 22:40:08.196743   65121 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 479 lines ...
Successful
message:NAME    DATA   AGE
one     0      1s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0114 22:40:15] Creating namespace namespace-1579041615-10390
namespace/namespace-1579041615-10390 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-01-14T22:40:16Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1579041615-10390", "resourceVersion":"780", "selfLink":"/api/v1/namespaces/namespace-1579041615-10390/pods/valid-pod", "uid":"d3d1f935-acf2-4fb3-8004-a1c0539cfbcf"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-01-14T22:40:16Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1579041615-10390","resourceVersion":"780","selfLink":"/api/v1/namespaces/namespace-1579041615-10390/pods/valid-pod","uid":"d3d1f935-acf2-4fb3-8004-a1c0539cfbcf"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-01-14T22:40:16Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1579041615-10390 resourceVersion:780 selfLink:/api/v1/namespaces/namespace-1579041615-10390/pods/valid-pod uid:d3d1f935-acf2-4fb3-8004-a1c0539cfbcf] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0114 22:40:22] Creating namespace namespace-1579041622-32359
namespace/namespace-1579041622-32359 created
Context "test" modified.
+++ [0114 22:40:22] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0114 22:40:23] Creating namespace namespace-1579041623-12285
namespace/namespace-1579041623-12285 created
Context "test" modified.
+++ [0114 22:40:23] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0114 22:40:24.790153   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041623-12285", Name:"frontend", UID:"007eb551-b4f5-45b2-a5bb-76abdc4f1074", APIVersion:"apps/v1", ResourceVersion:"839", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxl9s
I0114 22:40:24.792390   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041623-12285", Name:"frontend", UID:"007eb551-b4f5-45b2-a5bb-76abdc4f1074", APIVersion:"apps/v1", ResourceVersion:"839", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zg9sv
I0114 22:40:24.794377   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041623-12285", Name:"frontend", UID:"007eb551-b4f5-45b2-a5bb-76abdc4f1074", APIVersion:"apps/v1", ResourceVersion:"839", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-q8s2h
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-mxl9s does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-mxl9s does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"a7675cc7-0d60-4285-a247-236d48c2ad2a","resourceVersion":"860","creationTimestamp":"2020-01-14T22:40:26Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"a7675cc7-0d60-4285-a247-236d48c2ad2a","resourceVersion":"861","creationTimestamp":"2020-01-14T22:40:26Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"a7675cc7-0d60-4285-a247-236d48c2ad2a"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0114 22:40:39] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 194 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
Recording: run_cmd_with_img_tests
... skipping 11 lines ...
I0114 22:41:02.135377   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041661-24290", Name:"test1-6cdffdb5b8", UID:"44e48c65-159a-4387-82bd-133cf470a3c7", APIVersion:"apps/v1", ResourceVersion:"1033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-zfnjs
Successful
message:deployment.apps/test1 created
has:deployment.apps/test1 created
deployment.apps "test1" deleted
W0114 22:41:02.260561   51462 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:41:02.262122   54929 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0114 22:41:02.385386   51462 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:41:02.386990   54929 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
+++ [0114 22:41:02] Testing recursive resources
+++ [0114 22:41:02] Creating namespace namespace-1579041662-13844
W0114 22:41:02.533533   51462 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:41:02.534898   54929 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041662-13844 created
Context "test" modified.
W0114 22:41:02.667069   51462 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:41:02.668162   54929 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E0114 22:41:03.263279   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:03.388199   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:03.536248   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:03.669442   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:04.264620   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1579041662-13844
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 153 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:04.389440   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:04.537330   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:04.670491   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:41:05.266475   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:05.390665   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0114 22:41:05.435264   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041662-13844", Name:"nginx", UID:"b680ad37-7630-4ad0-a3df-2a3473eaf2ea", APIVersion:"apps/v1", ResourceVersion:"1058", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0114 22:41:05.441362   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx-f87d999f7", UID:"362d4251-7e52-486a-9acc-789d27e52a95", APIVersion:"apps/v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-dh6ks
I0114 22:41:05.443274   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx-f87d999f7", UID:"362d4251-7e52-486a-9acc-789d27e52a95", APIVersion:"apps/v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-snchw
I0114 22:41:05.445305   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx-f87d999f7", UID:"362d4251-7e52-486a-9acc-789d27e52a95", APIVersion:"apps/v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-g5xhg
E0114 22:41:05.538371   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0114 22:41:05.671659   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: extensions/v1beta1
... skipping 38 lines ...
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
I0114 22:41:05.932676   54929 namespace_controller.go:185] Namespace has been deleted non-native-resources
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:06.267539   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:06.391994   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:06.539542   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:06.672769   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:07.268791   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:07.393277   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:41:07.540768   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:41:07.674160   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:41:07.829494   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox0", UID:"81e634d0-a461-4b96-91b6-2c5be113cf21", APIVersion:"v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-s7694
I0114 22:41:07.832665   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox1", UID:"2ddabb86-45f8-485d-ac05-aecd199acadc", APIVersion:"v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-62mfs
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
E0114 22:41:08.270168   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:08.394416   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
E0114 22:41:08.541947   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
E0114 22:41:08.675062   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E0114 22:41:09.271292   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0114 22:41:09.395560   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:09.542941   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
E0114 22:41:09.676038   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0114 22:41:09.774081   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox0", UID:"81e634d0-a461-4b96-91b6-2c5be113cf21", APIVersion:"v1", ResourceVersion:"1113", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-gfclz
I0114 22:41:09.783518   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox1", UID:"2ddabb86-45f8-485d-ac05-aecd199acadc", APIVersion:"v1", ResourceVersion:"1118", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qxgp7
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:10.272218   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:41:10.396659   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:10.544047   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:41:10.567386   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041662-13844", Name:"nginx1-deployment", UID:"a8eb0e08-1cb9-44b6-8f08-53c78234ad65", APIVersion:"apps/v1", ResourceVersion:"1133", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
I0114 22:41:10.570616   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx1-deployment-7bdbbfb5cf", UID:"75d87808-37ca-418d-a528-c9ad92599690", APIVersion:"apps/v1", ResourceVersion:"1135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-25n7f
I0114 22:41:10.571298   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041662-13844", Name:"nginx0-deployment", UID:"2b962f60-0360-4850-b409-fe4ff2d3b243", APIVersion:"apps/v1", ResourceVersion:"1134", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0114 22:41:10.576522   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx1-deployment-7bdbbfb5cf", UID:"75d87808-37ca-418d-a528-c9ad92599690", APIVersion:"apps/v1", ResourceVersion:"1135", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-jf9sr
I0114 22:41:10.577304   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx0-deployment-57c6bff7f6", UID:"8d102bb7-1960-42da-9c3a-63a009dd3115", APIVersion:"apps/v1", ResourceVersion:"1138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-xxw7t
I0114 22:41:10.581401   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041662-13844", Name:"nginx0-deployment-57c6bff7f6", UID:"8d102bb7-1960-42da-9c3a-63a009dd3115", APIVersion:"apps/v1", ResourceVersion:"1138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-bvncg
E0114 22:41:10.676995   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
E0114 22:41:11.273235   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0114 22:41:11.397781   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0114 22:41:11.545362   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0114 22:41:11.678202   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E0114 22:41:12.274376   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:12.398892   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:12.546426   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:12.679351   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0114 22:41:13.104272   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox0", UID:"a62f7d2d-3b9d-4009-aa54-8b5e45d0dac2", APIVersion:"v1", ResourceVersion:"1183", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7r8wt
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:41:13.109734   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041662-13844", Name:"busybox1", UID:"e7a73eec-daa2-4034-98ac-0c5772a11262", APIVersion:"v1", ResourceVersion:"1185", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-c95mh
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:41:13.275392   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0114 22:41:13.400093   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
E0114 22:41:13.547610   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
E0114 22:41:13.680405   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
E0114 22:41:14.276384   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:14.401280   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:14.548746   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:14.681602   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0114 22:41:14] Testing kubectl(v1:namespaces)
namespace/my-namespace created
core.sh:1314: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
namespace "my-namespace" deleted
E0114 22:41:15.277519   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:15.402601   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:15.549924   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:15.682750   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:16.278960   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:16.403602   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:16.551094   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:16.683800   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:17.279819   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:17.404640   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:17.465460   54929 shared_informer.go:206] Waiting for caches to sync for resource quota
I0114 22:41:17.465512   54929 shared_informer.go:213] Caches are synced for resource quota 
E0114 22:41:17.552215   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:17.684976   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:17.967321   54929 shared_informer.go:206] Waiting for caches to sync for garbage collector
I0114 22:41:17.967560   54929 shared_informer.go:213] Caches are synced for garbage collector 
E0114 22:41:18.280963   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 identical reflector errors ...
namespace/my-namespace condition met
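"condition met" is the message kubectl prints when a wait completes; here the harness is presumably blocking until the namespace is fully deleted. A hedged sketch of such an invocation (the flags are an assumption, not read from the log):

  kubectl wait --for=delete namespace/my-namespace --timeout=60s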
E0114 22:41:20.283725   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
E0114 22:41:20.409648   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
E0114 22:41:20.556908   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1323: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
E0114 22:41:20.690154   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1579041496-31428" deleted
namespace "namespace-1579041499-19372" deleted
... skipping 26 lines ...
namespace "namespace-1579041628-2182" deleted
namespace "namespace-1579041629-25538" deleted
namespace "namespace-1579041632-18464" deleted
namespace "namespace-1579041634-15209" deleted
namespace "namespace-1579041661-24290" deleted
namespace "namespace-1579041662-13844" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1579041496-31428" deleted
... skipping 27 lines ...
namespace "namespace-1579041628-2182" deleted
namespace "namespace-1579041629-25538" deleted
namespace "namespace-1579041632-18464" deleted
namespace "namespace-1579041634-15209" deleted
namespace "namespace-1579041661-24290" deleted
namespace "namespace-1579041662-13844" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
core.sh:1335: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
E0114 22:41:21.285389   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/other created
E0114 22:41:21.411341   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1339: Successful get namespaces/other {{.metadata.name}}: other
E0114 22:41:21.558794   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1343: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:41:21.691290   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
core.sh:1347: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1349: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
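This check exercises a standing kubectl guard: a resource fetched by name cannot be combined with --all-namespaces. A sketch of an invocation that triggers it (the pod name is taken from the surrounding test):

  kubectl get pods valid-pod --all-namespaces
  # error: a resource cannot be retrieved by name across all namespaces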
E0114 22:41:22.286604   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1356: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E0114 22:41:22.412539   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
E0114 22:41:22.560608   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1360: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
E0114 22:41:22.692418   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:23.287646   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:23.372188   54929 horizontal.go:353] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1579041662-13844
I0114 22:41:23.376062   54929 horizontal.go:353] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1579041662-13844
E0114 22:41:23.413722   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 identical reflector errors ...
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 37 lines ...
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
E0114 22:41:28.293249   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-secrets created
E0114 22:41:28.420544   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
E0114 22:41:28.567043   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
E0114 22:41:28.698646   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
secret "test-secret" deleted
core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
E0114 22:41:29.294419   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E0114 22:41:29.421690   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
E0114 22:41:29.568071   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0114 22:41:29.699670   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/test-secret created
I0114 22:41:30.258583   54929 namespace_controller.go:185] Namespace has been deleted my-namespace
core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E0114 22:41:30.295621   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E0114 22:41:30.422821   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0114 22:41:30.569383   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/secret-string-data created
E0114 22:41:30.700766   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
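The map above decodes to plain strings (djE= and djI= are base64 for v1 and v2), and .stringData is empty because string data is folded into .data on write. A minimal sketch of an equivalent creation (the flags are an assumption; the test may create it from a manifest instead):

  kubectl create secret generic secret-string-data -n test-secrets \
    --from-literal=k1=v1 --from-literal=k2=v2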
I0114 22:41:31.019070   54929 namespace_controller.go:185] Namespace has been deleted kube-node-lease
I0114 22:41:31.026443   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041499-19372
secret "secret-string-data" deleted
... skipping 8 lines ...
core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0114 22:41:31.245232   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041542-27330
I0114 22:41:31.258096   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041544-9376
I0114 22:41:31.270239   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041563-4015
I0114 22:41:31.284435   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041567-12932
I0114 22:41:31.284497   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041566-8497
E0114 22:41:31.296700   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:31.300193   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041572-14708
I0114 22:41:31.324569   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041573-17780
secret "test-secret" deleted
I0114 22:41:31.332451   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041568-261
I0114 22:41:31.336117   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041561-11971
I0114 22:41:31.385296   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041578-6672
E0114 22:41:31.423824   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-secrets" deleted
I0114 22:41:31.502870   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041581-17047
I0114 22:41:31.503620   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041604-8834
I0114 22:41:31.507929   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041602-4197
I0114 22:41:31.523386   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041605-16625
I0114 22:41:31.552371   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041615-10390
I0114 22:41:31.554742   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041622-32359
E0114 22:41:31.570478   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:31.574256   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041582-727
I0114 22:41:31.574290   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041628-14576
I0114 22:41:31.578929   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041623-12285
I0114 22:41:31.602737   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041628-2182
I0114 22:41:31.694422   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041629-25538
E0114 22:41:31.701880   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:31.718545   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041632-18464
I0114 22:41:31.729797   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041634-15209
I0114 22:41:31.730209   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041661-24290
I0114 22:41:31.809209   54929 namespace_controller.go:185] Namespace has been deleted namespace-1579041662-13844
E0114 22:41:32.298107   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:32.425382   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:32.571860   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:32.703153   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:32.757665   54929 namespace_controller.go:185] Namespace has been deleted other
E0114 22:41:33.299455   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 identical reflector errors ...
+++ exit code: 0
E0114 22:41:36.578472   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [0114 22:41:36] Creating namespace namespace-1579041696-27377
E0114 22:41:36.708383   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041696-27377 created
Context "test" modified.
+++ [0114 22:41:36] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
(Bconfigmap "test-configmap" deleted
E0114 22:41:37.304791   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
E0114 22:41:37.431942   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
E0114 22:41:37.579616   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
E0114 22:41:37.709552   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
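Both configmaps asserted here can be reproduced with stock kubectl create commands; a sketch (the literal key and file name are illustrative assumptions, not read from the log):

  kubectl create configmap test-configmap -n test-configmaps --from-literal=key1=value1
  kubectl create configmap test-binary-configmap -n test-configmaps --from-file=data.bin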
E0114 22:41:38.306137   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:38.433239   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
E0114 22:41:38.581201   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-configmaps" deleted
E0114 22:41:38.710682   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 10 identical reflector errors ...
I0114 22:41:41.520209   54929 namespace_controller.go:185] Namespace has been deleted test-secrets
E0114 22:41:41.585139   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:41.714504   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:42.311272   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 identical reflector errors ...
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0114 22:41:43] Creating namespace namespace-1579041703-7143
namespace/namespace-1579041703-7143 created
Context "test" modified.
+++ [0114 22:41:43] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
E0114 22:41:44.313919   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
E0114 22:41:44.442799   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
E0114 22:41:44.588476   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
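Each failure above maps to a client-config flag pointing at something that does not exist; the flag values can be read straight out of the error messages. A sketch of the kinds of invocations being exercised:

  kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
  kubectl get pods --context=missing-context     # context was not found
  kubectl get pods --cluster=missing-cluster     # no server found for cluster
  kubectl get pods --user=missing-user           # auth info "missing-user" does not exist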
E0114 22:41:44.717725   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 2 lines ...
namespace/namespace-1579041704-446 created
Context "test" modified.
+++ [0114 22:41:44] Testing service accounts
core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
E0114 22:41:45.315303   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
serviceaccount/test-service-account created
E0114 22:41:45.443780   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
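A minimal sketch of the create/read pair asserted here (names taken from the log lines above):

  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts \
    -o go-template='{{.metadata.name}}'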
(Bserviceaccount "test-service-account" deleted
E0114 22:41:45.589818   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-service-accounts" deleted
E0114 22:41:45.718772   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 identical reflector errors ...
I0114 22:41:48.705800   54929 namespace_controller.go:185] Namespace has been deleted test-configmaps
E0114 22:41:48.722177   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:41:49.319834   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 identical reflector errors ...
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [0114 22:41:50] Creating namespace namespace-1579041710-4904
namespace/namespace-1579041710-4904 created
Context "test" modified.
+++ [0114 22:41:51] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
E0114 22:41:51.325563   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-jobs created
E0114 22:41:51.450056   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/pi created
E0114 22:41:51.597119   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
E0114 22:41:51.725532   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0        <none>          0s
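The cronjob listed above was created through the deprecated run generator flagged earlier; a hedged sketch of that invocation (the image and command are assumptions inferred from the perl/bpi job seen later in this log):

  kubectl run pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' \
    --image=k8s.gcr.io/perl -n test-jobs -- perl -Mbignum=bpi -wle 'print bpi(10)'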
Name:                          pi
Namespace:                     test-jobs
Labels:                        run=pi
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  run=pi
... skipping 19 lines ...
Successful
message:job.batch/test-job
has:job.batch/test-job
batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0114 22:41:52.250654   54929 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"ce75cba6-f048-40bf-b654-2a97f5365a46", APIVersion:"batch/v1", ResourceVersion:"1527", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-jrrss
job.batch/test-job created
E0114 22:41:52.326945   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
E0114 22:41:52.451324   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME       COMPLETIONS   DURATION   AGE
test-job   0/1           0s         0s
Name:           test-job
Namespace:      test-jobs
Selector:       controller-uid=ce75cba6-f048-40bf-b654-2a97f5365a46
Labels:         controller-uid=ce75cba6-f048-40bf-b654-2a97f5365a46
                job-name=test-job
                run=pi
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Tue, 14 Jan 2020 22:41:52 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=ce75cba6-f048-40bf-b654-2a97f5365a46
           job-name=test-job
           run=pi
  Containers:
   pi:
... skipping 12 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-jrrss
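The describe output records Controlled By: CronJob/pi and the cronjob.kubernetes.io/instantiate: manual annotation, which is the signature of instantiating a job from an existing cronjob:

  kubectl create job test-job --from=cronjob/pi -n test-jobs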
E0114 22:41:52.598339   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
job.batch "test-job" deleted
E0114 22:41:52.726683   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch "pi" deleted
namespace "test-jobs" deleted
E0114 22:41:53.328231   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 11 identical reflector errors ...
I0114 22:41:55.746628   54929 namespace_controller.go:185] Namespace has been deleted test-service-accounts
E0114 22:41:56.332100   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 identical reflector errors ...
+++ exit code: 0
Recording: run_create_job_tests
Running command: run_create_job_tests

+++ Running case: test-cmd.run_create_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_job_tests
+++ [0114 22:41:58] Creating namespace namespace-1579041718-31339
namespace/namespace-1579041718-31339 created
Context "test" modified.
E0114 22:41:58.334643   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:58.372402   54929 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579041718-31339", Name:"test-job", UID:"b1766e55-fede-4a47-8851-01273edc4de6", APIVersion:"batch/v1", ResourceVersion:"1549", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-wdtnp
job.batch/test-job created
E0114 22:41:58.458608   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
(Bjob.batch "test-job" deleted
E0114 22:41:58.607141   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:58.650960   54929 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579041718-31339", Name:"test-job-pi", UID:"7b88b63e-96a8-45e1-832d-d356d07dd0d3", APIVersion:"batch/v1", ResourceVersion:"1556", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-ck2hz
job.batch/test-job-pi created
E0114 22:41:58.733615   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
(Bjob.batch "test-job-pi" deleted
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/test-pi created
I0114 22:41:59.015111   54929 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579041718-31339", Name:"my-pi", UID:"55bbbcae-3cfe-44a4-ac4d-af00e6df1c46", APIVersion:"batch/v1", ResourceVersion:"1564", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-7rkz8
job.batch/my-pi created
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
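The two jobs asserted by create.sh:86 and create.sh:92 correspond to plain kubectl create job invocations; the images and the command below are taken from those assertions and the message above:

  kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
  kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'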
job.batch "my-pi" deleted
cronjob.batch "test-pi" deleted
+++ exit code: 0
E0114 22:41:59.335842   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_pod_templates_tests
Running command: run_pod_templates_tests

+++ Running case: test-cmd.run_pod_templates_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
+++ [0114 22:41:59] Creating namespace namespace-1579041719-20287
E0114 22:41:59.459755   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041719-20287 created
Context "test" modified.
+++ [0114 22:41:59] Testing pod templates
E0114 22:41:59.608386   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1421: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:41:59.734764   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:41:59.840711   51462 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
core.sh:1425: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
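The table above implies a minimal PodTemplate object; a sketch of an equivalent manifest, reconstructed from the printed columns rather than the actual test fixture:

  apiVersion: v1
  kind: PodTemplate
  metadata:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx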
core.sh:1433: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0114 22:42:00.337195   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
podtemplate "nginx" deleted
E0114 22:42:00.460833   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1437: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_tests
E0114 22:42:00.609731   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:42:00] Testing kubectl(v1:services)
core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:42:00.735797   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
matched Name:
matched Labels:
matched Selector:
matched IP:
... skipping 28 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:42:01.338354   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:868: Successful describe
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
... skipping 3 lines ...
IP:                10.0.0.126
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
E0114 22:42:01.461993   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:870: Successful describe
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
... skipping 4 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:42:01.610837   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Labels:
matched Selector:
matched IP:
matched Port:
matched Endpoints:
... skipping 25 lines ...
IP:                10.0.0.126
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:42:01.736918   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 120 lines ...
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
service/redis-master selector updated
E0114 22:42:02.339466   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
E0114 22:42:02.463105   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master selector updated
E0114 22:42:02.611797   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-14T22:42:00Z"
  labels:
... skipping 14 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
E0114 22:42:02.738148   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0114 22:42:02.990380   54929 namespace_controller.go:185] Namespace has been deleted test-jobs
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
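
The Conflict above is the apiserver's optimistic-concurrency check rejecting an update made against a stale metadata.resourceVersion. A hedged way to reproduce the same failure (commands are illustrative, not the exact core.sh steps):

    # Export the live object, then mutate it out-of-band so the export goes stale
    $ kubectl get service redis-master -o yaml > /tmp/svc.yaml
    $ kubectl label service redis-master touched=true   # bumps resourceVersion
    # Replacing with the stale manifest now fails with the Conflict error seen above
    $ kubectl replace -f /tmp/svc.yaml
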
E0114 22:42:03.340651   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:911: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E0114 22:42:03.464296   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
E0114 22:42:03.612911   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:918: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:922: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:42:03.739203   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
core.sh:926: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
core.sh:930: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service/service-v1-test created
E0114 22:42:04.341937   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:951: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
E0114 22:42:04.465508   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test replaced
E0114 22:42:04.614260   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:958: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
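
The "service/service-v1-test replaced" step above is a round-trip replace. One way to exercise the same path (a sketch, not necessarily the exact core.sh invocation) is to pipe a live object straight back into kubectl replace, which succeeds because the exported manifest still carries the current resourceVersion:

    $ kubectl get service service-v1-test -o yaml | kubectl replace -f -
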
E0114 22:42:04.740286   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
service "service-v1-test" deleted
core.sh:966: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:970: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
E0114 22:42:05.343181   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-slave created
E0114 22:42:05.466548   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:975: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
E0114 22:42:05.616372   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:NAME           RSRC
kubernetes     144
redis-master   1602
redis-slave    1605
has:redis-master
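
The NAME/RSRC listing above is consistent with kubectl's custom-columns output keyed on metadata.resourceVersion; a minimal sketch (the column spec here is an assumption, not read from core.sh):

    $ kubectl get services -o custom-columns=NAME:.metadata.name,RSRC:.metadata.resourceVersion
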
core.sh:985: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
E0114 22:42:05.741466   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
service "redis-slave" deleted
core.sh:992: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:996: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bservice/beep-boop created
core.sh:1000: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
core.sh:1004: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
E0114 22:42:06.345336   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "beep-boop" deleted
E0114 22:42:06.467515   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1011: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:42:06.617503   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1015: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I0114 22:42:06.723957   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"a9ec0215-9d9d-48e7-ae27-1a1dcc04d103", APIVersion:"apps/v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
I0114 22:42:06.729951   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"48871ea7-7337-4324-965f-75e6dde71a62", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-9k44n
I0114 22:42:06.733280   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"48871ea7-7337-4324-965f-75e6dde71a62", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-tpbkb
service/testmetadata created
deployment.apps/testmetadata created
E0114 22:42:06.742297   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1019: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1020: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
service/exposemetadata exposed
core.sh:1026: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
(Bservice "exposemetadata" deleted
service "testmetadata" deleted
deployment.apps "testmetadata" deleted
E0114 22:42:07.346329   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
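
The testmetadata/exposemetadata steps above verify that an annotation set at creation (zone-context:home) and one on the exposed service (zone-context:work) are visible via a template read. The assertions at core.sh:1020 and core.sh:1026 amount to something like the following (the flags used to set the annotations are not shown in the log):

    $ kubectl get service testmetadata -o go-template='{{.metadata.annotations}}'
    # map[zone-context:home]
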
Recording: run_daemonset_tests
Running command: run_daemonset_tests

+++ Running case: test-cmd.run_daemonset_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_tests
+++ [0114 22:42:07] Creating namespace namespace-1579041727-1484
E0114 22:42:07.468684   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041727-1484 created
Context "test" modified.
+++ [0114 22:42:07] Testing kubectl(v1:daemonsets)
E0114 22:42:07.618676   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:07.743348   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:42:07.875947   51462 controller.go:606] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I0114 22:42:07.884561   51462 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
E0114 22:42:08.347364   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind image updated
E0114 22:42:08.469837   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
E0114 22:42:08.619781   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
E0114 22:42:08.744292   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind resource requirements updated
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
daemonset.apps/bind restarted
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
(Bdaemonset.apps "bind" deleted
+++ exit code: 0
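
The generation walk above (apps.sh:34 through apps.sh:48, .metadata.generation going 1 to 5) matches one spec mutation per step. A hedged reconstruction with commands that each bump the generation (the daemonset name is from the log; image names and values are illustrative):

    $ kubectl create -f ds.yaml                                     # generation 1
    $ kubectl set image daemonset/bind '*=k8s.gcr.io/pause:latest'  # generation 2
    $ kubectl set env daemonset/bind FOO=bar                        # generation 3
    $ kubectl set resources daemonset/bind --limits=cpu=100m        # generation 4
    $ kubectl rollout restart daemonset/bind                        # generation 5
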
... skipping 2 lines ...

+++ Running case: test-cmd.run_daemonset_history_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_history_tests
+++ [0114 22:42:09] Creating namespace namespace-1579041729-16974
namespace/namespace-1579041729-16974 created
E0114 22:42:09.348287   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:42:09] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
E0114 22:42:09.471035   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:09.620866   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind created
E0114 22:42:09.745406   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579041729-16974"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
E0114 22:42:10.350070   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
E0114 22:42:10.472127   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E0114 22:42:10.621972   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579041729-16974"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579041729-16974"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0114 22:42:10.746503   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind will roll back to Pod Template:
  Labels:	service=bind
  Containers:
   kubernetes-pause:
    Image:	k8s.gcr.io/pause:2.0
    Port:	<none>
... skipping 4 lines ...
 (dry run)
apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0114 22:42:11.351219   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0114 22:42:11.473418   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
E0114 22:42:11.623071   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0114 22:42:11.747569   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0114 22:42:11.879902   54929 daemon_controller.go:291] namespace-1579041729-16974/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1579041729-16974", SelfLink:"/apis/apps/v1/namespaces/namespace-1579041729-16974/daemonsets/bind", UID:"b430c22f-9156-44b9-a4fc-38125ead7ad7", ResourceVersion:"1692", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714638529, loc:(*time.Location)(0x6b23a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1579041729-16974\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a671a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029daf58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0013cf500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc000a67220), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0006c25b0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0029dafcc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
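
The history test above drives kubectl rollout against the controllerrevisions recorded for the daemonset: a dry-run undo that only prints the target pod template, a real undo back to revision 1, and an undo to a revision that does not exist. A sketch of the three calls (flag spelling per kubectl of this era):

    $ kubectl rollout undo daemonset/bind --to-revision=1 --dry-run  # prints template, "(dry run)"
    $ kubectl rollout undo daemonset/bind --to-revision=1            # daemonset.apps/bind rolled back
    $ kubectl rollout undo daemonset/bind --to-revision=1000000      # error: unable to find specified revision
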
E0114 22:42:12.352440   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_rc_tests
Running command: run_rc_tests

+++ Running case: test-cmd.run_rc_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rc_tests
+++ [0114 22:42:12] Creating namespace namespace-1579041732-25537
E0114 22:42:12.474575   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041732-25537 created
Context "test" modified.
+++ [0114 22:42:12] Testing kubectl(v1:replicationcontrollers)
E0114 22:42:12.624241   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1052: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:12.748606   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:42:12.899062   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"d0106b3d-38d5-4b5f-9b5b-880e49c7754a", APIVersion:"v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5gcbs
I0114 22:42:12.902720   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"d0106b3d-38d5-4b5f-9b5b-880e49c7754a", APIVersion:"v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2dqfh
I0114 22:42:12.911627   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"d0106b3d-38d5-4b5f-9b5b-880e49c7754a", APIVersion:"v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n6m4z
replicationcontroller "frontend" deleted
core.sh:1057: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1061: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:13.353862   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:42:13.400060   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gnwpg
I0114 22:42:13.406098   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fjgx6
I0114 22:42:13.406267   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xh475
E0114 22:42:13.475561   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1065: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
matched Name:
matched Pod Template:
E0114 22:42:13.628020   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Labels:
matched Selector:
matched Replicas:
matched Pods Status:
matched Volumes:
matched GET_HOSTS_FROM:
... skipping 2 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-gnwpg
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-fjgx6
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-xh475
E0114 22:42:13.749703   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1069: Successful describe
Name:         frontend
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 3 lines ...
      cpu:     100m
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E0114 22:42:14.355029   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579041732-25537
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-gnwpg
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-fjgx6
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-xh475
E0114 22:42:14.476934   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1085: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0114 22:42:14.608833   54929 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579041732-25537 /api/v1/namespaces/namespace-1579041732-25537/replicationcontrollers/frontend 00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9 1727 2 2020-01-14 22:42:13 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000ada108 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 22:42:14.615619   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1727", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-gnwpg
E0114 22:42:14.628740   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1089: Successful get rc frontend {{.spec.replicas}}: 2
E0114 22:42:14.750773   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1093: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1097: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1101: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0114 22:42:15.187455   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1733", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wgf5b
core.sh:1105: Successful get rc frontend {{.spec.replicas}}: 3
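
The "error: Expected replicas to be 3, was 2" above is the signature of a scale request carrying a precondition on the current size, consistent with kubectl scale's --current-replicas guard (a sketch; the exact core.sh invocation is not shown in the log):

    # Fails while the rc is at 2 replicas, so spec.replicas stays 2
    $ kubectl scale rc frontend --current-replicas=3 --replicas=2
    # Without the precondition the scale goes through
    $ kubectl scale rc frontend --replicas=3
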
E0114 22:42:15.356093   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1109: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0114 22:42:15.462681   54929 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579041732-25537 /api/v1/namespaces/namespace-1579041732-25537/replicationcontrollers/frontend 00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9 1738 4 2020-01-14 22:42:13 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002eb94b8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 22:42:15.467317   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"00a5c0c6-ad30-45c2-8c6a-7b3a41099cf9", APIVersion:"v1", ResourceVersion:"1738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-wgf5b
E0114 22:42:15.484492   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1113: Successful get rc frontend {{.spec.replicas}}: 2
E0114 22:42:15.629917   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "frontend" deleted
E0114 22:42:15.751719   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master created
I0114 22:42:15.834084   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-master", UID:"15779a53-dbaf-4945-96be-0ad6ed9542c3", APIVersion:"v1", ResourceVersion:"1748", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-2swjc
replicationcontroller/redis-slave created
I0114 22:42:16.028939   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"80154912-2fa1-4399-b729-2b05572c488d", APIVersion:"v1", ResourceVersion:"1757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-n6kl7
I0114 22:42:16.034432   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"80154912-2fa1-4399-b729-2b05572c488d", APIVersion:"v1", ResourceVersion:"1757", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-8gz5h
replicationcontroller/redis-master scaled
... skipping 2 lines ...
I0114 22:42:16.136613   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-master", UID:"15779a53-dbaf-4945-96be-0ad6ed9542c3", APIVersion:"v1", ResourceVersion:"1764", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-2nk89
I0114 22:42:16.136675   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-master", UID:"15779a53-dbaf-4945-96be-0ad6ed9542c3", APIVersion:"v1", ResourceVersion:"1764", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-zhtp4
I0114 22:42:16.138737   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"80154912-2fa1-4399-b729-2b05572c488d", APIVersion:"v1", ResourceVersion:"1766", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-wtrct
I0114 22:42:16.141149   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"80154912-2fa1-4399-b729-2b05572c488d", APIVersion:"v1", ResourceVersion:"1766", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5wkdv
core.sh:1123: Successful get rc redis-master {{.spec.replicas}}: 4
core.sh:1124: Successful get rc redis-slave {{.spec.replicas}}: 4
E0114 22:42:16.357245   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
E0114 22:42:16.485706   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:42:16.611630   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment", UID:"16ef64d1-27eb-4e61-adf0-da5112f853aa", APIVersion:"apps/v1", ResourceVersion:"1798", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:42:16.614906   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"3e47d3c4-198a-48d2-bca7-4a4e150bb046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-t4g9w
I0114 22:42:16.618010   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"3e47d3c4-198a-48d2-bca7-4a4e150bb046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-ndbgc
I0114 22:42:16.618333   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"3e47d3c4-198a-48d2-bca7-4a4e150bb046", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xh6b7
E0114 22:42:16.630789   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment scaled
I0114 22:42:16.710479   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment", UID:"16ef64d1-27eb-4e61-adf0-da5112f853aa", APIVersion:"apps/v1", ResourceVersion:"1812", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
I0114 22:42:16.719214   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"3e47d3c4-198a-48d2-bca7-4a4e150bb046", APIVersion:"apps/v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-ndbgc
I0114 22:42:16.719839   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"3e47d3c4-198a-48d2-bca7-4a4e150bb046", APIVersion:"apps/v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-t4g9w
E0114 22:42:16.752638   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
(Bdeployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
E0114 22:42:17.358356   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:42:17.366841   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment", UID:"1bbe8ac4-1295-4866-8f74-cedc5ba51b81", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:42:17.371679   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"c289b9c2-c7a3-4ac8-8897-b7d806141be2", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mjz5f
I0114 22:42:17.374355   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"c289b9c2-c7a3-4ac8-8897-b7d806141be2", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-l48j5
I0114 22:42:17.374476   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-6986c7bc94", UID:"c289b9c2-c7a3-4ac8-8897-b7d806141be2", APIVersion:"apps/v1", ResourceVersion:"1837", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-l5h42
core.sh:1152: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
E0114 22:42:17.486646   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/nginx-deployment exposed
E0114 22:42:17.631838   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1156: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
(Bdeployment.apps "nginx-deployment" deleted
E0114 22:42:17.753952   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "nginx-deployment" deleted
replicationcontroller/frontend created
I0114 22:42:17.946076   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a5acee92-b5d7-491b-b3df-b094f86af4dd", APIVersion:"v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-49prv
I0114 22:42:17.948171   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a5acee92-b5d7-491b-b3df-b094f86af4dd", APIVersion:"v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5zv7k
I0114 22:42:17.949266   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a5acee92-b5d7-491b-b3df-b094f86af4dd", APIVersion:"v1", ResourceVersion:"1866", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-48fm7
core.sh:1163: Successful get rc frontend {{.spec.replicas}}: 3
service/frontend exposed
core.sh:1167: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
E0114 22:42:18.359335   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1171: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
E0114 22:42:18.487615   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:18.632966   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
service/frontend-3 exposed
E0114 22:42:18.754756   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1176: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
service/frontend-4 exposed
core.sh:1180: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
service/frontend-5 exposed
core.sh:1184: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
(Bpod "valid-pod" deleted
E0114 22:42:19.360305   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
E0114 22:42:19.488538   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
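
The rejection above is standard DNS-label validation: a Service name must be at most 63 characters. A sketch of triggering it directly (the resource being exposed is illustrative; the over-long name is taken from the log):

    $ kubectl expose rc frontend --port=80 \
        --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
    # ... is invalid: metadata.name: Invalid value ... must be no more than 63 characters
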
E0114 22:42:19.634054   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
E0114 22:42:19.755730   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
Successful
message:service/etcd-server exposed
has:etcd-server exposed
core.sh:1214: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
core.sh:1215: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
service "etcd-server" deleted
core.sh:1221: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:42:20.361493   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "frontend" deleted
core.sh:1225: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:20.489700   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1229: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:20.635168   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:20.756850   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:42:20.774353   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a79c67ff-645c-47ae-bab2-d1653b2e1d12", APIVersion:"v1", ResourceVersion:"1930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gs7k4
I0114 22:42:20.776470   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a79c67ff-645c-47ae-bab2-d1653b2e1d12", APIVersion:"v1", ResourceVersion:"1930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zq5bj
I0114 22:42:20.780681   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"a79c67ff-645c-47ae-bab2-d1653b2e1d12", APIVersion:"v1", ResourceVersion:"1930", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-flzm7
replicationcontroller/redis-slave created
I0114 22:42:20.980787   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"4526940a-8599-4032-b06e-24f4f7daf6d2", APIVersion:"v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-nrr4r
I0114 22:42:20.983371   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"redis-slave", UID:"4526940a-8599-4032-b06e-24f4f7daf6d2", APIVersion:"v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-vlgv6
core.sh:1234: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
core.sh:1238: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicationcontroller "frontend" deleted
replicationcontroller "redis-slave" deleted
E0114 22:42:21.362646   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1242: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:21.490791   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1246: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:21.636165   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:42:21.690019   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"ab6c59de-210a-46be-ae1c-41d413eb30c6", APIVersion:"v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-72xcw
I0114 22:42:21.692911   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"ab6c59de-210a-46be-ae1c-41d413eb30c6", APIVersion:"v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4bmd5
I0114 22:42:21.693673   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041732-25537", Name:"frontend", UID:"ab6c59de-210a-46be-ae1c-41d413eb30c6", APIVersion:"v1", ResourceVersion:"1960", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zjf2n
E0114 22:42:21.758059   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1249: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1252: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1256: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
E0114 22:42:22.363701   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "frontend" deleted
E0114 22:42:22.491989   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Error: required flag(s) "max" not set
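
The two HPA checks above ("1 2 70", then "2 3 80") and the error just logged map onto kubectl autoscale, whose --max flag is mandatory. The likely shape of the three calls (reconstructed, not the literal core.sh lines):

  kubectl autoscale rc frontend --max=2 --cpu-percent=70          # --min defaults to 1, giving "1 2 70"
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80  # giving "2 3 80"
  kubectl autoscale rc frontend --min=2 --cpu-percent=80          # fails: required flag "max" not set
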
replicationcontroller "frontend" deleted
E0114 22:42:22.637351   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:22.759224   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: nginx-deployment-resources
... skipping 22 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
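
The manifest above, with creationTimestamp: null and an empty status: {}, is the signature of client-side generation: the object was rendered to YAML without being persisted, and the NotFound on the next line confirms nothing exists in the cluster yet. The pattern, with a hypothetical file name (older kubectl releases spell the flag as a bare --dry-run):

  # Render the object locally; no write reaches the API server.
  kubectl create -f nginx-deployment-resources.yaml --dry-run=client -o yaml
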
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0114 22:42:23.110949   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0114 22:42:23.114900   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-67f8cfff5", UID:"32191e3d-302c-4a93-bca9-987953ff9c87", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-qkf78
I0114 22:42:23.116600   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-67f8cfff5", UID:"32191e3d-302c-4a93-bca9-987953ff9c87", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-kq8db
I0114 22:42:23.118912   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-67f8cfff5", UID:"32191e3d-302c-4a93-bca9-987953ff9c87", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-n85hh
core.sh:1271: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1272: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:23.364950   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:42:23.493305   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:42:23.562641   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"1994", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0114 22:42:23.568057   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-55c547f795", UID:"f29872bf-bca9-41b8-87eb-46bf69f1699a", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-96jjp
E0114 22:42:23.639668   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
E0114 22:42:23.760288   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
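
kubectl set resources patches requests and limits per container, addressed by name via -c, and it fails up front when the pod template has no container by that name, which is the redis error above. Roughly:

  # Update one container's CPU limit by name.
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m
  # Fails as logged: the pod template defines no container named "redis".
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
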
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:42:23.990208   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"2006", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
I0114 22:42:23.996111   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-55c547f795", UID:"f29872bf-bca9-41b8-87eb-46bf69f1699a", APIVersion:"apps/v1", ResourceVersion:"2010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-96jjp
I0114 22:42:23.997383   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0114 22:42:24.002753   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-6d86564b45", UID:"0fac232a-810d-4abf-8839-576c111d90b8", APIVersion:"apps/v1", ResourceVersion:"2014", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-c2gfn
core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:42:24.303025   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"2029", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
I0114 22:42:24.309344   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-67f8cfff5", UID:"32191e3d-302c-4a93-bca9-987953ff9c87", APIVersion:"apps/v1", ResourceVersion:"2033", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-qkf78
I0114 22:42:24.309810   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources", UID:"e62d36c3-a43b-46ef-be70-4c4c4336200d", APIVersion:"apps/v1", ResourceVersion:"2031", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
I0114 22:42:24.316690   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041732-25537", Name:"nginx-deployment-resources-6c478d4fdb", UID:"ce9b6635-dcaa-45e8-b85d-61732ee475e5", APIVersion:"apps/v1", ResourceVersion:"2037", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-lnsz6
E0114 22:42:24.366167   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
E0114 22:42:24.494354   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
E0114 22:42:24.640719   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2020-01-14T22:42:23Z"
... skipping 65 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
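
The error above is kubectl refusing --local without an input object: in local mode it only transforms what -f supplies and prints the result, never contacting the server, so a filename is mandatory. A sketch with a hypothetical manifest:

  # Offline transform: read the manifest, apply the change, print YAML; no API calls.
  kubectl set resources -f rsrc.yaml --limits=cpu=200m --local -o yaml
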
E0114 22:42:24.761573   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1292: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1293: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1294: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
deployment.apps "nginx-deployment-resources" deleted
+++ exit code: 0
Recording: run_deployment_tests
Running command: run_deployment_tests

+++ Running case: test-cmd.run_deployment_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_deployment_tests
+++ [0114 22:42:25] Creating namespace namespace-1579041745-8816
namespace/namespace-1579041745-8816 created
E0114 22:42:25.367235   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:42:25] Testing deployments
deployment.apps/test-nginx-extensions created
E0114 22:42:25.495360   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:42:25.496438   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"test-nginx-extensions", UID:"e51fc62b-7099-43c8-8f2a-fc11281fe491", APIVersion:"apps/v1", ResourceVersion:"2064", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
I0114 22:42:25.501825   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"test-nginx-extensions-5559c76db7", UID:"a49f7e40-da85-4889-bfa1-58435ad1f628", APIVersion:"apps/v1", ResourceVersion:"2065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-thrj9
apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
E0114 22:42:25.641764   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:10
has not:2
E0114 22:42:25.762759   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:apps/v1
has:apps/v1
deployment.apps "test-nginx-extensions" deleted
deployment.apps/test-nginx-apps created
I0114 22:42:25.948988   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"test-nginx-apps", UID:"d1a36f74-77dd-4df3-8ff1-5617f4b47b85", APIVersion:"apps/v1", ResourceVersion:"2081", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
... skipping 21 lines ...
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 3 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-wldvl
E0114 22:42:26.368244   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
matched Controlled By
... skipping 18 lines ...
    Mounts:       <none>
Volumes:          <none>
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
E0114 22:42:26.496568   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-nginx-apps" deleted
E0114 22:42:26.642979   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created
I0114 22:42:26.738071   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-with-command", UID:"b2018b83-34a5-49a8-b2bb-77e9259a89f3", APIVersion:"apps/v1", ResourceVersion:"2095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
I0114 22:42:26.742563   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-with-command-757c6f58dd", UID:"053aa08d-0568-442d-8f16-4a7ae9feadb0", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-q65tm
E0114 22:42:26.763592   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:218: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
(Bdeployment.apps "nginx-with-command" deleted
apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/deployment-with-unixuserid created
I0114 22:42:27.249666   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"deployment-with-unixuserid", UID:"e0c75825-b3b8-4ad0-bc7e-adfdd7fa28c6", APIVersion:"apps/v1", ResourceVersion:"2109", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
I0114 22:42:27.253791   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"a6b9dc02-2797-452f-8df0-f65ce4aa5362", APIVersion:"apps/v1", ResourceVersion:"2110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-gw6qg
E0114 22:42:27.369548   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
(Bdeployment.apps "deployment-with-unixuserid" deleted
E0114 22:42:27.497607   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:27.643788   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:42:27.751575   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"fbcf91ed-aa06-4567-abf9-da04bc4f728b", APIVersion:"apps/v1", ResourceVersion:"2125", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:42:27.755392   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"8a9a4e2d-d2b1-4c91-b3d7-d98c4282811b", APIVersion:"apps/v1", ResourceVersion:"2126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-k7j9b
I0114 22:42:27.758601   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"8a9a4e2d-d2b1-4c91-b3d7-d98c4282811b", APIVersion:"apps/v1", ResourceVersion:"2126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4mb7z
I0114 22:42:27.759046   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"8a9a4e2d-d2b1-4c91-b3d7-d98c4282811b", APIVersion:"apps/v1", ResourceVersion:"2126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xfwtf
E0114 22:42:27.764273   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
(Bdeployment.apps "nginx-deployment" deleted
apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0114 22:42:28.317145   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"1bf43ee0-fb60-4282-94f7-d908557cf4bd", APIVersion:"apps/v1", ResourceVersion:"2147", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
I0114 22:42:28.319809   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-7f6fc565b9", UID:"6384bf42-cdb3-4bf8-8448-64d7aa9cad1a", APIVersion:"apps/v1", ResourceVersion:"2148", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-pm6pr
E0114 22:42:28.370639   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
(Bdeployment.apps "nginx-deployment" deleted
E0114 22:42:28.498641   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:28.644810   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
E0114 22:42:28.765497   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "nginx-deployment-7f6fc565b9" deleted
apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0114 22:42:29.169172   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"a10a136c-6a9a-4358-be6f-eba8ef9a334b", APIVersion:"apps/v1", ResourceVersion:"2165", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:42:29.172880   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"a69a4a6e-f611-4958-be8a-8f32be9ccb12", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-pmp2w
I0114 22:42:29.174907   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"a69a4a6e-f611-4958-be8a-8f32be9ccb12", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mzt4x
I0114 22:42:29.176010   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6986c7bc94", UID:"a69a4a6e-f611-4958-be8a-8f32be9ccb12", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-cdcfr
apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
E0114 22:42:29.371563   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
E0114 22:42:29.499703   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
deployment.apps "nginx-deployment" deleted
E0114 22:42:29.646045   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:29.766547   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I0114 22:42:29.921784   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx", UID:"4e5bc4f4-caf9-4018-9ac0-8cb83472eb1e", APIVersion:"apps/v1", ResourceVersion:"2191", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0114 22:42:29.924422   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-f87d999f7", UID:"54c902ab-744f-4f04-8aa7-76da9f4d065d", APIVersion:"apps/v1", ResourceVersion:"2192", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-qmd84
I0114 22:42:29.929513   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-f87d999f7", UID:"54c902ab-744f-4f04-8aa7-76da9f4d065d", APIVersion:"apps/v1", ResourceVersion:"2192", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-w6zzv
I0114 22:42:29.929554   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-f87d999f7", UID:"54c902ab-744f-4f04-8aa7-76da9f4d065d", APIVersion:"apps/v1", ResourceVersion:"2192", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-spxwh
apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
(Bdeployment.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:30.372692   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:30.500869   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx configured
I0114 22:42:30.513272   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx", UID:"4e5bc4f4-caf9-4018-9ac0-8cb83472eb1e", APIVersion:"apps/v1", ResourceVersion:"2205", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
I0114 22:42:30.515762   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-78487f9fd7", UID:"e0fa6dba-fef5-4b7c-9564-8dbc465d60dd", APIVersion:"apps/v1", ResourceVersion:"2206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-lgjb5
apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0114 22:42:30.647132   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
    Image:	k8s.gcr.io/nginx:test-cmd
E0114 22:42:30.767564   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
E0114 22:42:31.373768   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 3 identical reflector errors ...
apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
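
This stretch exercises the edge cases of kubectl rollout undo: undoing to a revision whose template already matches is skipped, a nonexistent revision (1000000) is an error, and a bare undo returns to the previous revision. The likely commands behind the log lines:

  kubectl rollout undo deployment nginx --to-revision=1        # skipped when the template already matches
  kubectl rollout undo deployment nginx --to-revision=1000000  # error: revision not found in history
  kubectl rollout undo deployment nginx                        # back to the previous revision
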
E0114 22:42:32.374644   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 identical reflector errors ...
apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
E0114 22:42:33.650525   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
E0114 22:42:33.770879   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
E0114 22:42:34.376857   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: desired revision (3) is different from the running revision (5)
E0114 22:42:34.505620   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx restarted
I0114 22:42:34.566774   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx", UID:"4e5bc4f4-caf9-4018-9ac0-8cb83472eb1e", APIVersion:"apps/v1", ResourceVersion:"2237", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
I0114 22:42:34.572661   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx", UID:"4e5bc4f4-caf9-4018-9ac0-8cb83472eb1e", APIVersion:"apps/v1", ResourceVersion:"2240", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-867d67bd9c to 1
I0114 22:42:34.573998   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-f87d999f7", UID:"54c902ab-744f-4f04-8aa7-76da9f4d065d", APIVersion:"apps/v1", ResourceVersion:"2241", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-qmd84
I0114 22:42:34.578533   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-867d67bd9c", UID:"554c7957-39f1-4a68-a293-e29a0edbb8ae", APIVersion:"apps/v1", ResourceVersion:"2244", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-867d67bd9c-nwv67
E0114 22:42:34.651544   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 4 identical reflector errors ...
Successful
message:apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "3"
... skipping 48 lines ...
      terminationGracePeriodSeconds: 30
status:
  fullyLabeledReplicas: 1
  observedGeneration: 2
  replicas: 1
has:deployment.kubernetes.io/revision: "6"
E0114 22:42:35.773168   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx2 created
I0114 22:42:35.966442   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx2", UID:"2c9ecbc8-a269-4f77-b112-2a8955d2917c", APIVersion:"apps/v1", ResourceVersion:"2261", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-57b7865cd9 to 3
I0114 22:42:35.971497   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx2-57b7865cd9", UID:"431a7ce5-186c-425b-8da5-49839dbea1a8", APIVersion:"apps/v1", ResourceVersion:"2262", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-496j2
I0114 22:42:35.973477   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx2-57b7865cd9", UID:"431a7ce5-186c-425b-8da5-49839dbea1a8", APIVersion:"apps/v1", ResourceVersion:"2262", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-zhfgh
I0114 22:42:35.975183   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx2-57b7865cd9", UID:"431a7ce5-186c-425b-8da5-49839dbea1a8", APIVersion:"apps/v1", ResourceVersion:"2262", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-b5s8l
deployment.apps "nginx2" deleted
deployment.apps "nginx" deleted
apps.sh:334: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:36.379207   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:42:36.438428   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"44bb43a3-d9c2-4beb-a2fc-c27b53752e83", APIVersion:"apps/v1", ResourceVersion:"2295", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0114 22:42:36.443066   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"a2fec3c4-56de-498b-9b99-c04ad95ae54d", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-mlrrb
I0114 22:42:36.444839   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"a2fec3c4-56de-498b-9b99-c04ad95ae54d", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-cqqf7
I0114 22:42:36.447114   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"a2fec3c4-56de-498b-9b99-c04ad95ae54d", APIVersion:"apps/v1", ResourceVersion:"2296", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-klzdb
E0114 22:42:36.507779   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:337: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:36.653977   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:339: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:42:36.774272   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment image updated
I0114 22:42:36.831844   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"44bb43a3-d9c2-4beb-a2fc-c27b53752e83", APIVersion:"apps/v1", ResourceVersion:"2309", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0114 22:42:36.837482   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-59df9b5f5b", UID:"c0fa99a2-e81b-4c20-9746-222bf2e6dd96", APIVersion:"apps/v1", ResourceVersion:"2310", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-bm99z
I0114 22:42:36.892756   54929 horizontal.go:353] Horizontal Pod Autoscaler frontend has been deleted in namespace-1579041732-25537
apps.sh:342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:37.380621   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:349: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:42:37.509228   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment image updated
E0114 22:42:37.655204   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:353: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:42:37.775488   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0114 22:42:38.167984   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"44bb43a3-d9c2-4beb-a2fc-c27b53752e83", APIVersion:"apps/v1", ResourceVersion:"2329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0114 22:42:38.175017   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"44bb43a3-d9c2-4beb-a2fc-c27b53752e83", APIVersion:"apps/v1", ResourceVersion:"2332", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
I0114 22:42:38.176443   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"a2fec3c4-56de-498b-9b99-c04ad95ae54d", APIVersion:"apps/v1", ResourceVersion:"2333", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-cqqf7
I0114 22:42:38.180771   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-7d758dbc54", UID:"22455f94-a352-4321-945f-6519294c43eb", APIVersion:"apps/v1", ResourceVersion:"2336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-cn5sh
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:38.381792   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:38.510350   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:38.656973   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:42:38.776590   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0114 22:42:39.080908   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2362", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0114 22:42:39.084144   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-xn2fq
I0114 22:42:39.086863   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-s5vbf
I0114 22:42:39.087640   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-7br2g
configmap/test-set-env-config created
E0114 22:42:39.382930   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-set-env-secret created
E0114 22:42:39.511606   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:376: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E0114 22:42:39.658323   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:378: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
E0114 22:42:39.778062   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:379: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
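
The env updates below draw on the configmap and secret created just above; kubectl set env can import every key with --from, rename with --prefix, set literals as KEY=value, and remove a variable with a trailing dash. A sketch of plausible calls (the exact combination used by apps.sh is not visible here):

  kubectl set env deployment nginx-deployment --from=configmap/test-set-env-config
  kubectl set env deployment nginx-deployment --from=secret/test-set-env-secret --prefix=PRE_
  kubectl set env deployment nginx-deployment KEY_2-   # the trailing "-" deletes the variable
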
deployment.apps/nginx-deployment env updated
I0114 22:42:39.928363   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2380", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
I0114 22:42:39.934066   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6b9f7756b4", UID:"90d8012d-e753-4c57-b002-b9e66b245a7d", APIVersion:"apps/v1", ResourceVersion:"2381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-tsh6m
apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
deployment.apps/nginx-deployment env updated
I0114 22:42:40.253667   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2390", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0114 22:42:40.261297   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2394", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-xn2fq
I0114 22:42:40.263788   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-754bf964c8 to 1
I0114 22:42:40.267773   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-754bf964c8", UID:"f54a6f1a-d647-4963-998a-7745b5a69ecd", APIVersion:"apps/v1", ResourceVersion:"2398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-754bf964c8-twv4x
apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
E0114 22:42:40.384178   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0114 22:42:40.475430   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2411", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 1
I0114 22:42:40.484086   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-s5vbf
I0114 22:42:40.489660   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-c6d5c5c7b to 1
I0114 22:42:40.492684   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-c6d5c5c7b", UID:"b34c76b5-edd5-4b2c-9030-ec215bdade61", APIVersion:"apps/v1", ResourceVersion:"2419", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-c6d5c5c7b-lqvcc
E0114 22:42:40.512585   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0114 22:42:40.591036   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 0
I0114 22:42:40.596841   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2433", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5958f7687 to 1
I0114 22:42:40.600892   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-598d4d68b4", UID:"d59b95fc-a0e9-4b10-955f-5e0a5d495607", APIVersion:"apps/v1", ResourceVersion:"2435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-7br2g
I0114 22:42:40.600943   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-5958f7687", UID:"0168635e-277b-49d9-956d-6503dd581931", APIVersion:"apps/v1", ResourceVersion:"2438", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5958f7687-4n59s
E0114 22:42:40.659506   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0114 22:42:40.708093   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2447", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6b9f7756b4 to 0
E0114 22:42:40.779235   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:42:40.781490   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment", UID:"aeb27ecf-71e9-4cc5-8c76-8ce574721d55", APIVersion:"apps/v1", ResourceVersion:"2449", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-98b7fd455 to 1
deployment.apps/nginx-deployment env updated
I0114 22:42:40.934588   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041745-8816", Name:"nginx-deployment-6b9f7756b4", UID:"90d8012d-e753-4c57-b002-b9e66b245a7d", APIVersion:"apps/v1", ResourceVersion:"2450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-tsh6m
deployment.apps/nginx-deployment env updated
deployment.apps "nginx-deployment" deleted
E0114 22:42:41.081902   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-98b7fd455" failed with replicasets.apps "nginx-deployment-98b7fd455" not found
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
E0114 22:42:41.281293   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-598d4d68b4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-598d4d68b4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1579041745-8816/nginx-deployment-598d4d68b4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d59b95fc-a0e9-4b10-955f-5e0a5d495607, UID in object meta: 
+++ exit code: 0
E0114 22:42:41.331930   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-5958f7687" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5958f7687": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1579041745-8816/nginx-deployment-5958f7687, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0168635e-277b-49d9-956d-6503dd581931, UID in object meta: 
Recording: run_rs_tests
Running command: run_rs_tests
E0114 22:42:41.385364   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0114 22:42:41] Creating namespace namespace-1579041761-31836
E0114 22:42:41.433884   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-6b9f7756b4" failed with replicasets.apps "nginx-deployment-6b9f7756b4" not found
E0114 22:42:41.481388   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-d74969475" failed with replicasets.apps "nginx-deployment-d74969475" not found
namespace/namespace-1579041761-31836 created
E0114 22:42:41.513619   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:41.530998   54929 replica_set.go:534] sync "namespace-1579041745-8816/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
Context "test" modified.
+++ [0114 22:42:41] Testing kubectl(v1:replicasets)
E0114 22:42:41.660778   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:511: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:41.780412   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:41.909302   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"5016bf12-f85f-4f6b-a497-51ab137f5636", APIVersion:"apps/v1", ResourceVersion:"2484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hgjwp
I0114 22:42:41.912940   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"5016bf12-f85f-4f6b-a497-51ab137f5636", APIVersion:"apps/v1", ResourceVersion:"2484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s6xnq
I0114 22:42:41.913436   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"5016bf12-f85f-4f6b-a497-51ab137f5636", APIVersion:"apps/v1", ResourceVersion:"2484", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sh64f
+++ [0114 22:42:41] Deleting rs
E0114 22:42:42.032500   54929 replica_set.go:534] sync "namespace-1579041761-31836/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1579041761-31836/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 5016bf12-f85f-4f6b-a497-51ab137f5636, UID in object meta: 
replicaset.apps "frontend" deleted
apps.sh:517: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:521: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:42.386660   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:42.486611   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"874ef244-638c-4aab-9865-50ce0fda2b58", APIVersion:"apps/v1", ResourceVersion:"2499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qq4c4
I0114 22:42:42.488202   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"874ef244-638c-4aab-9865-50ce0fda2b58", APIVersion:"apps/v1", ResourceVersion:"2499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j9pgn
I0114 22:42:42.489877   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"874ef244-638c-4aab-9865-50ce0fda2b58", APIVersion:"apps/v1", ResourceVersion:"2499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zzjzz
E0114 22:42:42.516306   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:525: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
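The "Successful get" lines are produced by the test harness rendering a go-template against the live object and comparing the result with an expected string. Re-run by hand, the check at apps.sh:525 is roughly equivalent to the following (a sketch of the harness's behavior, not the literal script):

  kubectl get pods -l tier=frontend \
    -o go-template='{{range .items}}{{(index .spec.containers 0).name}}:{{end}}'
  # expected output: php-redis:php-redis:php-redis: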
+++ [0114 22:42:42] Deleting rs
E0114 22:42:42.661955   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
E0114 22:42:42.784651   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:42.784667   54929 replica_set.go:534] sync "namespace-1579041761-31836/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1579041761-31836/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 874ef244-638c-4aab-9865-50ce0fda2b58, UID in object meta: 
apps.sh:529: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:531: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
(Bpod "frontend-j9pgn" deleted
pod "frontend-qq4c4" deleted
pod "frontend-zzjzz" deleted
apps.sh:534: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:538: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:43.387856   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:43.472243   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"0c7bd54b-9059-438a-86f2-efc594d60ace", APIVersion:"apps/v1", ResourceVersion:"2521", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5f2t5
I0114 22:42:43.474684   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"0c7bd54b-9059-438a-86f2-efc594d60ace", APIVersion:"apps/v1", ResourceVersion:"2521", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-d4k79
I0114 22:42:43.475932   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"0c7bd54b-9059-438a-86f2-efc594d60ace", APIVersion:"apps/v1", ResourceVersion:"2521", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mq4vl
E0114 22:42:43.517496   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:542: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:42:43.662953   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Replicas:
matched Pods Status:
... skipping 3 lines ...
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-5f2t5
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-d4k79
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-mq4vl
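The "matched Name:" lines above show the harness checking for section headers in describe output. The block they match presumably comes from a plain describe call:

  kubectl describe rs frontend
  # the test greps this output for headers such as Name:, Selector:,
  # Replicas:, Pods Status:, and the SuccessfulCreate events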
E0114 22:42:43.785938   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:546: Successful describe
Name:         frontend
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-5f2t5
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-d4k79
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-mq4vl
I0114 22:42:44.360373   54929 horizontal.go:353] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1579041745-8816
E0114 22:42:44.388876   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 3 lines ...
      cpu:     100m
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E0114 22:42:44.518519   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579041761-31836
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 101 lines ...
    Mounts:            <none>
Volumes:               <none>
QoS Class:             Burstable
Node-Selectors:        <none>
Tolerations:           <none>
Events:                <none>
E0114 22:42:44.664002   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:564: Successful get rs frontend {{.spec.replicas}}: 3
E0114 22:42:44.787042   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend scaled
E0114 22:42:44.844033   54929 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579041761-31836 /apis/apps/v1/namespaces/namespace-1579041761-31836/replicasets/frontend 0c7bd54b-9059-438a-86f2-efc594d60ace 2532 2 2020-01-14 22:42:43 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v3 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00136cec8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 22:42:44.848868   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"0c7bd54b-9059-438a-86f2-efc594d60ace", APIVersion:"apps/v1", ResourceVersion:"2532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-5f2t5
apps.sh:568: Successful get rs frontend {{.spec.replicas}}: 2
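The scale-down from 3 to 2 replicas, including the SuccessfulDelete event for frontend-5f2t5, is what a single scale call produces:

  kubectl scale rs frontend --replicas=2
  kubectl get rs frontend -o go-template='{{.spec.replicas}}'   # prints 2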
deployment.apps/scale-1 created
I0114 22:42:45.118720   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-1", UID:"2baad551-0dfb-4860-a785-e81b6c37013f", APIVersion:"apps/v1", ResourceVersion:"2538", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
I0114 22:42:45.122370   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-1-5c5565bcd9", UID:"8e1e7ebd-dd01-4e42-97aa-a8082079936f", APIVersion:"apps/v1", ResourceVersion:"2539", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-ml72j
deployment.apps/scale-2 created
I0114 22:42:45.307106   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-2", UID:"d4112765-8940-42a6-b093-fe576e2c16f5", APIVersion:"apps/v1", ResourceVersion:"2548", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
I0114 22:42:45.313418   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-2-5c5565bcd9", UID:"54bb5469-ed1f-45ea-b507-635aa7b09fb6", APIVersion:"apps/v1", ResourceVersion:"2549", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-gbc5w
E0114 22:42:45.390021   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-3 created
I0114 22:42:45.511309   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-3", UID:"b4a9c80d-4744-4d86-8a2d-d356ab593048", APIVersion:"apps/v1", ResourceVersion:"2558", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
I0114 22:42:45.517130   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-3-5c5565bcd9", UID:"dae3a5b3-3fa4-4b4f-8f23-54b70eaadfc8", APIVersion:"apps/v1", ResourceVersion:"2559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-jjskf
E0114 22:42:45.519527   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:574: Successful get deploy scale-1 {{.spec.replicas}}: 1
E0114 22:42:45.665171   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:575: Successful get deploy scale-2 {{.spec.replicas}}: 1
E0114 22:42:45.788064   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:576: Successful get deploy scale-3 {{.spec.replicas}}: 1
deployment.apps/scale-1 scaled
I0114 22:42:45.902044   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-1", UID:"2baad551-0dfb-4860-a785-e81b6c37013f", APIVersion:"apps/v1", ResourceVersion:"2571", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 2
deployment.apps/scale-2 scaled
I0114 22:42:45.908519   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-1-5c5565bcd9", UID:"8e1e7ebd-dd01-4e42-97aa-a8082079936f", APIVersion:"apps/v1", ResourceVersion:"2572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-fhkmk
I0114 22:42:45.910161   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-2", UID:"d4112765-8940-42a6-b093-fe576e2c16f5", APIVersion:"apps/v1", ResourceVersion:"2573", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 2
... skipping 8 lines ...
deployment.apps/scale-3 scaled
I0114 22:42:46.293531   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-2", UID:"d4112765-8940-42a6-b093-fe576e2c16f5", APIVersion:"apps/v1", ResourceVersion:"2593", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 3
I0114 22:42:46.295796   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-2-5c5565bcd9", UID:"54bb5469-ed1f-45ea-b507-635aa7b09fb6", APIVersion:"apps/v1", ResourceVersion:"2599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-wxjz5
I0114 22:42:46.297446   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041761-31836", Name:"scale-3", UID:"b4a9c80d-4744-4d86-8a2d-d356ab593048", APIVersion:"apps/v1", ResourceVersion:"2600", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 3
I0114 22:42:46.301683   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-3-5c5565bcd9", UID:"dae3a5b3-3fa4-4b4f-8f23-54b70eaadfc8", APIVersion:"apps/v1", ResourceVersion:"2604", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-k5jhb
I0114 22:42:46.304700   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"scale-3-5c5565bcd9", UID:"dae3a5b3-3fa4-4b4f-8f23-54b70eaadfc8", APIVersion:"apps/v1", ResourceVersion:"2604", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-s94c6
E0114 22:42:46.391037   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:584: Successful get deploy scale-1 {{.spec.replicas}}: 3
apps.sh:585: Successful get deploy scale-2 {{.spec.replicas}}: 3
E0114 22:42:46.520513   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:586: Successful get deploy scale-3 {{.spec.replicas}}: 3
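The interleaved ScalingReplicaSet events for scale-1, scale-2, and scale-3 suggest the three deployments are scaled in one invocation; kubectl scale accepts several resources of the same kind, so the step was presumably something like:

  # assumption: one multi-resource scale call rather than three separate ones
  kubectl scale deploy scale-1 scale-2 scale-3 --replicas=3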
E0114 22:42:46.666239   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
E0114 22:42:46.788888   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:46.966360   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"a0f6d80e-91e9-437c-a050-5d90ef2e3da7", APIVersion:"apps/v1", ResourceVersion:"2652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s4d8p
I0114 22:42:46.969568   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"a0f6d80e-91e9-437c-a050-5d90ef2e3da7", APIVersion:"apps/v1", ResourceVersion:"2652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pwm59
I0114 22:42:46.971757   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"a0f6d80e-91e9-437c-a050-5d90ef2e3da7", APIVersion:"apps/v1", ResourceVersion:"2652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9rkrq
apps.sh:594: Successful get rs frontend {{.spec.replicas}}: 3
service/frontend exposed
apps.sh:598: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
E0114 22:42:47.392302   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:602: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
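The two port checks differ only in the port name: the frontend service's port is unnamed (the go-template prints "<no value>"), while frontend-2's is named "default". A sketch of the first expose and its check:

  kubectl expose rs frontend --port=80
  kubectl get service frontend \
    -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'
  # prints: <no value> 80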
E0114 22:42:47.521943   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "frontend" deleted
service "frontend-2" deleted
apps.sh:608: Successful get rs frontend {{.metadata.generation}}: 1
E0114 22:42:47.667237   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend image updated
E0114 22:42:47.790083   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:610: Successful get rs frontend {{.metadata.generation}}: 2
replicaset.apps/frontend env updated
apps.sh:612: Successful get rs frontend {{.metadata.generation}}: 3
replicaset.apps/frontend resource requirements updated
apps.sh:614: Successful get rs frontend {{.metadata.generation}}: 4
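Each mutation bumps .metadata.generation by one: the image update to 2, the env update to 3, and the resource update to 4. Sketches of the corresponding kubectl set calls (the image tag and values here are placeholders, not the test's actual arguments):

  kubectl set image rs/frontend php-redis=gcr.io/google_samples/gb-frontend:v4
  kubectl set env rs/frontend PLACEHOLDER=1
  kubectl set resources rs/frontend --limits=cpu=200m,memory=512Mi
  kubectl get rs frontend -o go-template='{{.metadata.generation}}'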
apps.sh:618: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:42:48.393561   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
E0114 22:42:48.522639   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:622: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:626: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:48.668503   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:48.791245   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:48.798686   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"4fc346d2-32f7-4554-a2da-b80206996b05", APIVersion:"apps/v1", ResourceVersion:"2688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n472c
I0114 22:42:48.800753   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"4fc346d2-32f7-4554-a2da-b80206996b05", APIVersion:"apps/v1", ResourceVersion:"2688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4lj6v
I0114 22:42:48.801168   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"4fc346d2-32f7-4554-a2da-b80206996b05", APIVersion:"apps/v1", ResourceVersion:"2688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pbdww
replicaset.apps/redis-slave created
I0114 22:42:48.992813   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"redis-slave", UID:"e042bc26-b537-46ac-af30-5a028a2a5758", APIVersion:"apps/v1", ResourceVersion:"2697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-h2rvk
I0114 22:42:48.996069   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"redis-slave", UID:"e042bc26-b537-46ac-af30-5a028a2a5758", APIVersion:"apps/v1", ResourceVersion:"2697", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-hktqd
apps.sh:631: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
apps.sh:635: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicaset.apps "frontend" deleted
replicaset.apps "redis-slave" deleted
apps.sh:639: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:49.394687   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:644: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:49.523711   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:42:49.649172   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"33818f27-8001-4e4c-a523-f6676e128321", APIVersion:"apps/v1", ResourceVersion:"2716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s57ts
I0114 22:42:49.652326   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"33818f27-8001-4e4c-a523-f6676e128321", APIVersion:"apps/v1", ResourceVersion:"2716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8vxp4
I0114 22:42:49.652796   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041761-31836", Name:"frontend", UID:"33818f27-8001-4e4c-a523-f6676e128321", APIVersion:"apps/v1", ResourceVersion:"2716", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9qmlw
E0114 22:42:49.669367   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:647: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:42:49.792215   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:650: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:654: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
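That error is kubectl autoscale rejecting a call without the mandatory --max flag; the two successful autoscales just above set min/max/CPU targets matching the assertions. A sketch of the three calls (the failing one's exact flags are an assumption beyond the missing --max):

  kubectl autoscale rs frontend --min=1 --max=2 --cpu-percent=70   # first HPA: 1 2 70
  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # second HPA: 2 3 80
  kubectl autoscale rs frontend --min=2 --cpu-percent=80           # Error: required flag(s) "max" not set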
E0114 22:42:50.395705   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests
E0114 22:42:50.524820   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_stateful_set_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_stateful_set_tests
+++ [0114 22:42:50] Creating namespace namespace-1579041770-32368
namespace/namespace-1579041770-32368 created
E0114 22:42:50.670392   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:42:50] Testing kubectl(v1:statefulsets)
E0114 22:42:50.793480   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:470: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
I0114 22:42:51.022043   51462 controller.go:606] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
apps.sh:476: Successful get statefulset nginx {{.spec.replicas}}: 0
apps.sh:477: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0114 22:42:51.334393   54929 event.go:278] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"namespace-1579041770-32368", Name:"nginx", UID:"fb3dc536-d6dd-4494-aed8-61dc943f2dcb", APIVersion:"apps/v1", ResourceVersion:"2743", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod nginx-0 in StatefulSet nginx successful
E0114 22:42:51.396941   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:481: Successful get statefulset nginx {{.spec.replicas}}: 1
E0114 22:42:51.526042   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:482: Successful get statefulset nginx {{.status.observedGeneration}}: 2
E0114 22:42:51.671531   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx restarted
E0114 22:42:51.794352   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:490: Successful get statefulset nginx {{.status.observedGeneration}}: 3
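The StatefulSet is created with 0 replicas (generation 1), scaled to 1 (observedGeneration 2), then restarted, which edits the pod template and bumps the generation again (observedGeneration 3). A sketch of the scale and restart steps:

  kubectl scale statefulset nginx --replicas=1
  kubectl rollout restart statefulset nginx
  kubectl get statefulset nginx -o go-template='{{.status.observedGeneration}}'   # 3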
(Bstatefulset.apps "nginx" deleted
I0114 22:42:51.959353   54929 stateful_set.go:420] StatefulSet has been deleted namespace-1579041770-32368/nginx
+++ exit code: 0
Recording: run_statefulset_history_tests
Running command: run_statefulset_history_tests
... skipping 2 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_statefulset_history_tests
+++ [0114 22:42:52] Creating namespace namespace-1579041772-18247
namespace/namespace-1579041772-18247 created
Context "test" modified.
+++ [0114 22:42:52] Testing kubectl(v1:statefulsets, v1:controllerrevisions)
E0114 22:42:52.398142   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:418: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:52.527163   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx created
E0114 22:42:52.672336   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:422: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1579041772-18247"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0114 22:42:52.795428   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:425: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:426: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
(Bstatefulset.apps/nginx configured
E0114 22:42:53.399400   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:429: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
E0114 22:42:53.528325   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:430: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:431: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E0114 22:42:53.673539   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:432: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1579041772-18247"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1579041772-18247"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.8","name":"nginx","ports":[{"containerPort":80,"name":"web"}]},{"image":"k8s.gcr.io/pause:2.0","name":"pause","ports":[{"containerPort":81,"name":"web-2"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
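The kubernetes.io/change-cause annotations above are written by --record: the flag stores the exact apply command on the StatefulSet, and each ControllerRevision snapshots it alongside the last-applied configuration. The recorded commands can be replayed directly:

  kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true
  kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true
  kubectl get controllerrevisions -o go-template='{{range .items}}{{.metadata.annotations}}:{{end}}'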
E0114 22:42:53.796490   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx will roll back to Pod Template:
  Labels:	app=nginx-statefulset
  Containers:
   nginx:
    Image:	k8s.gcr.io/nginx-slim:0.7
    Port:	80/TCP
... skipping 8 lines ...
 (dry run)
apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:436: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:437: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
E0114 22:42:54.400591   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0114 22:42:54.529513   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
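Rollbacks address revisions in the ControllerRevision history, and asking for a revision that does not exist fails cleanly, as asserted above. A sketch of the two undo calls (the target revision in the second line is illustrative; only the 1000000 value is shown in the log):

  kubectl rollout undo statefulset nginx --to-revision=1000000   # error: revision not in history
  kubectl rollout undo statefulset nginx --to-revision=1         # back to k8s.gcr.io/nginx-slim:0.7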
apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
E0114 22:42:54.674658   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:446: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0114 22:42:54.797770   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx rolled back
apps.sh:449: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:450: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:451: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps "nginx" deleted
I0114 22:42:55.259745   54929 stateful_set.go:420] StatefulSet has been deleted namespace-1579041772-18247/nginx
+++ exit code: 0
E0114 22:42:55.401808   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_lists_tests
Running command: run_lists_tests

+++ Running case: test-cmd.run_lists_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_lists_tests
+++ [0114 22:42:55] Creating namespace namespace-1579041775-20310
E0114 22:42:55.530587   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579041775-20310 created
Context "test" modified.
+++ [0114 22:42:55] Testing kubectl(v1:lists)
E0114 22:42:55.675758   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:55.798895   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/list-service-test created
deployment.apps/list-deployment-test created
I0114 22:42:55.827059   54929 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579041775-20310", Name:"list-deployment-test", UID:"91abc43e-f220-4b77-b708-b93fd9ec16cc", APIVersion:"apps/v1", ResourceVersion:"2784", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set list-deployment-test-7cd8c5ff6d to 1
I0114 22:42:55.832904   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579041775-20310", Name:"list-deployment-test-7cd8c5ff6d", UID:"62076f83-be8f-45f8-a0d5-6eeee3543eef", APIVersion:"apps/v1", ResourceVersion:"2785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: list-deployment-test-7cd8c5ff6d-fwwzn
service "list-service-test" deleted
deployment.apps "list-deployment-test" deleted
... skipping 8 lines ...
namespace/namespace-1579041776-25246 created
Context "test" modified.
+++ [0114 22:42:56] Testing kubectl(v1:multiple resources)
Testing with file hack/testdata/multi-resource-yaml.yaml and replace with file hack/testdata/multi-resource-yaml-modify.yaml
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:42:56.402823   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:42:56.531770   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/mock created
replicationcontroller/mock created
I0114 22:42:56.575730   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041776-25246", Name:"mock", UID:"89e3c915-8f52-4712-a830-692a192d1625", APIVersion:"v1", ResourceVersion:"2806", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-6jvmr
E0114 22:42:56.676757   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
E0114 22:42:56.799950   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/mock   ClusterIP   10.0.0.183   <none>        99/TCP    0s

NAME                         DESIRED   CURRENT   READY   AGE
replicationcontroller/mock   1         1         0       0s
Name:              mock
... skipping 13 lines ...
Name:         mock
Namespace:    namespace-1579041776-25246
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 7 lines ...
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: mock-6jvmr
service "mock" deleted
replicationcontroller "mock" deleted
service/mock replaced
replicationcontroller/mock replaced
I0114 22:42:57.309324   54929 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579041776-25246", Name:"mock", UID:"deaf436a-454d-40ba-8c50-feb30aa534b9", APIVersion:"v1", ResourceVersion:"2820", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-62lbp
E0114 22:42:57.403927   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
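The replace step swaps both objects for the versions in the modify file, which presumably carry a status=replaced label; the assertion then reads that label back:

  kubectl replace -f hack/testdata/multi-resource-yaml-modify.yaml
  kubectl get services mock -o go-template='{{.metadata.labels.status}}'   # replaced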
E0114 22:42:57.532937   54929 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource