PR knight42: Fix race condition in pluginWatcher
Result: FAILURE
Tests: 1 failed / 2946 succeeded
Started: 2020-08-01 04:03
Elapsed: 52m26s
Revision: e5cfa79a69de901eb79ba739fc868da443bd8f19
Refs: 93622
Resultstore: https://source.cloud.google.com/results/invocations/85168ed8-f73b-4361-bc51-425a7ba30bd3/targets/test

Test Failures


k8s.io/kubernetes/test/integration/deployment TestDeploymentAvailableCondition 7.01s

go test -v k8s.io/kubernetes/test/integration/deployment -run TestDeploymentAvailableCondition$
=== RUN   TestDeploymentAvailableCondition
W0801 04:48:07.562172  116165 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0801 04:48:07.562203  116165 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0801 04:48:07.562218  116165 master.go:315] Node port range unspecified. Defaulting to 30000-32767.
I0801 04:48:07.562244  116165 master.go:271] Using reconciler: 
I0801 04:48:07.562444  116165 config.go:637] Not requested to run hook priority-and-fairness-config-consumer
I0801 04:48:07.563628  116165 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.563820  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.564002  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.565362  116165 store.go:1378] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0801 04:48:07.565438  116165 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.565506  116165 reflector.go:243] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0801 04:48:07.565846  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.565867  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.566814  116165 store.go:1378] Monitoring events count at <storage-prefix>//events
I0801 04:48:07.566857  116165 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0801 04:48:07.566882  116165 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.567073  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.567102  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.567755  116165 store.go:1378] Monitoring limitranges count at <storage-prefix>//limitranges
I0801 04:48:07.567957  116165 cacher.go:402] cacher (*core.PodTemplate): initialized
I0801 04:48:07.567980  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.567946  116165 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.568051  116165 cacher.go:402] cacher (*core.Event): initialized
I0801 04:48:07.568063  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.568080  116165 reflector.go:243] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0801 04:48:07.568146  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.568163  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.569377  116165 store.go:1378] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0801 04:48:07.569416  116165 reflector.go:243] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0801 04:48:07.569970  116165 cacher.go:402] cacher (*core.LimitRange): initialized
I0801 04:48:07.569992  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.576343  116165 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.576909  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.577024  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.577424  116165 cacher.go:402] cacher (*core.ResourceQuota): initialized
I0801 04:48:07.577448  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.578125  116165 store.go:1378] Monitoring secrets count at <storage-prefix>//secrets
I0801 04:48:07.578167  116165 reflector.go:243] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0801 04:48:07.578324  116165 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.578446  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.578468  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.579117  116165 cacher.go:402] cacher (*core.Secret): initialized
I0801 04:48:07.579137  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.579147  116165 store.go:1378] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0801 04:48:07.579181  116165 reflector.go:243] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0801 04:48:07.579389  116165 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.579557  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.579585  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.579896  116165 cacher.go:402] cacher (*core.PersistentVolume): initialized
I0801 04:48:07.579913  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.581147  116165 store.go:1378] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0801 04:48:07.581215  116165 reflector.go:243] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0801 04:48:07.581333  116165 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.588078  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.588116  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.589067  116165 store.go:1378] Monitoring configmaps count at <storage-prefix>//configmaps
I0801 04:48:07.589132  116165 reflector.go:243] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0801 04:48:07.589303  116165 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.589394  116165 cacher.go:402] cacher (*core.PersistentVolumeClaim): initialized
I0801 04:48:07.589419  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.589440  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.589456  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.590290  116165 store.go:1378] Monitoring namespaces count at <storage-prefix>//namespaces
I0801 04:48:07.590312  116165 cacher.go:402] cacher (*core.ConfigMap): initialized
I0801 04:48:07.590431  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.590348  116165 reflector.go:243] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0801 04:48:07.590595  116165 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.590732  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.590750  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.591341  116165 cacher.go:402] cacher (*core.Namespace): initialized
I0801 04:48:07.591366  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.597743  116165 store.go:1378] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0801 04:48:07.597880  116165 reflector.go:243] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0801 04:48:07.597975  116165 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.598221  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.598253  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.599375  116165 store.go:1378] Monitoring nodes count at <storage-prefix>//minions
I0801 04:48:07.599495  116165 reflector.go:243] Listing and watching *core.Node from storage/cacher.go:/minions
I0801 04:48:07.599599  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.599767  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.599872  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.599967  116165 cacher.go:402] cacher (*core.Endpoints): initialized
I0801 04:48:07.599989  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.600179  116165 cacher.go:402] cacher (*core.Node): initialized
I0801 04:48:07.600227  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.601470  116165 store.go:1378] Monitoring pods count at <storage-prefix>//pods
I0801 04:48:07.601655  116165 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.601757  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.601794  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.602012  116165 reflector.go:243] Listing and watching *core.Pod from storage/cacher.go:/pods
I0801 04:48:07.602777  116165 store.go:1378] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0801 04:48:07.602828  116165 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.602862  116165 cacher.go:402] cacher (*core.Pod): initialized
I0801 04:48:07.602877  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.602933  116165 reflector.go:243] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0801 04:48:07.602948  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.603192  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.603897  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.603929  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.604794  116165 cacher.go:402] cacher (*core.ServiceAccount): initialized
I0801 04:48:07.604939  116165 watch_cache.go:521] Replace watchCache (rev: 27583) 
I0801 04:48:07.610575  116165 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.610754  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.610861  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.613885  116165 store.go:1378] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0801 04:48:07.614064  116165 reflector.go:243] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0801 04:48:07.614222  116165 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.614558  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.614590  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.615577  116165 store.go:1378] Monitoring services count at <storage-prefix>//services/specs
I0801 04:48:07.615609  116165 reflector.go:243] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0801 04:48:07.616790  116165 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.617132  116165 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.617655  116165 cacher.go:402] cacher (*core.ReplicationController): initialized
I0801 04:48:07.617774  116165 watch_cache.go:521] Replace watchCache (rev: 27585) 
I0801 04:48:07.618406  116165 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.624519  116165 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.625824  116165 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.626665  116165 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.627723  116165 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.628057  116165 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.628420  116165 cacher.go:402] cacher (*core.Service): initialized
I0801 04:48:07.628451  116165 watch_cache.go:521] Replace watchCache (rev: 27585) 
I0801 04:48:07.628858  116165 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.629638  116165 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.635153  116165 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.635536  116165 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.636438  116165 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.637396  116165 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.638202  116165 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.638505  116165 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.644401  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.645173  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.645329  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.645467  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.645659  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.645829  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.646005  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.646758  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.647170  116165 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.654742  116165 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.655708  116165 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.656065  116165 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.656490  116165 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.661478  116165 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.661779  116165 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.662492  116165 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.663227  116165 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.669411  116165 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.670333  116165 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.670647  116165 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.670751  116165 master.go:539] Enabling API group "authentication.k8s.io".
I0801 04:48:07.670780  116165 master.go:539] Enabling API group "authorization.k8s.io".
I0801 04:48:07.670978  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.671123  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.671146  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.672065  116165 store.go:1378] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0801 04:48:07.672186  116165 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0801 04:48:07.672815  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.672952  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.672977  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.679505  116165 cacher.go:402] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0801 04:48:07.679533  116165 watch_cache.go:521] Replace watchCache (rev: 27592) 
I0801 04:48:07.680172  116165 store.go:1378] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0801 04:48:07.681375  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.680283  116165 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0801 04:48:07.681725  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.681824  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.686778  116165 cacher.go:402] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0801 04:48:07.686804  116165 watch_cache.go:521] Replace watchCache (rev: 27594) 
I0801 04:48:07.687841  116165 store.go:1378] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0801 04:48:07.687943  116165 reflector.go:243] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0801 04:48:07.688101  116165 master.go:539] Enabling API group "autoscaling".
I0801 04:48:07.689493  116165 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.689690  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.689824  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.690764  116165 store.go:1378] Monitoring jobs.batch count at <storage-prefix>//jobs
I0801 04:48:07.691103  116165 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.691296  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.691329  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.691556  116165 reflector.go:243] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0801 04:48:07.693536  116165 store.go:1378] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0801 04:48:07.693733  116165 master.go:539] Enabling API group "batch".
I0801 04:48:07.693613  116165 reflector.go:243] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0801 04:48:07.694495  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.698872  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.698995  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.700086  116165 store.go:1378] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0801 04:48:07.700189  116165 reflector.go:243] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0801 04:48:07.701131  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.701353  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.701444  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.702382  116165 store.go:1378] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0801 04:48:07.702464  116165 reflector.go:243] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0801 04:48:07.702467  116165 master.go:539] Enabling API group "certificates.k8s.io".
I0801 04:48:07.702794  116165 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.702943  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.703021  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.703341  116165 cacher.go:402] cacher (*autoscaling.HorizontalPodAutoscaler): initialized
I0801 04:48:07.703393  116165 watch_cache.go:521] Replace watchCache (rev: 27595) 
I0801 04:48:07.705090  116165 cacher.go:402] cacher (*certificates.CertificateSigningRequest): initialized
I0801 04:48:07.705110  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.705228  116165 cacher.go:402] cacher (*certificates.CertificateSigningRequest): initialized
I0801 04:48:07.705242  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.705276  116165 cacher.go:402] cacher (*batch.CronJob): initialized
I0801 04:48:07.705293  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.706109  116165 store.go:1378] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0801 04:48:07.706251  116165 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0801 04:48:07.706454  116165 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.706647  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.706681  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.706804  116165 cacher.go:402] cacher (*batch.Job): initialized
I0801 04:48:07.706907  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.715717  116165 store.go:1378] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0801 04:48:07.715747  116165 master.go:539] Enabling API group "coordination.k8s.io".
I0801 04:48:07.716485  116165 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.716717  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.716747  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.717322  116165 reflector.go:243] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0801 04:48:07.718727  116165 cacher.go:402] cacher (*coordination.Lease): initialized
I0801 04:48:07.718809  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.719041  116165 cacher.go:402] cacher (*coordination.Lease): initialized
I0801 04:48:07.719059  116165 watch_cache.go:521] Replace watchCache (rev: 27596) 
I0801 04:48:07.720298  116165 store.go:1378] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0801 04:48:07.720377  116165 master.go:539] Enabling API group "discovery.k8s.io".
I0801 04:48:07.720635  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.721102  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.721123  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.721302  116165 reflector.go:243] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0801 04:48:07.722888  116165 store.go:1378] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0801 04:48:07.722921  116165 master.go:539] Enabling API group "extensions".
I0801 04:48:07.723204  116165 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.723254  116165 cacher.go:402] cacher (*discovery.EndpointSlice): initialized
I0801 04:48:07.723272  116165 watch_cache.go:521] Replace watchCache (rev: 27598) 
I0801 04:48:07.723377  116165 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0801 04:48:07.723530  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.723554  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.725909  116165 cacher.go:402] cacher (*networking.Ingress): initialized
I0801 04:48:07.725936  116165 watch_cache.go:521] Replace watchCache (rev: 27598) 
I0801 04:48:07.731575  116165 store.go:1378] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0801 04:48:07.731879  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.731951  116165 reflector.go:243] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0801 04:48:07.732077  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.732121  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.733541  116165 cacher.go:402] cacher (*networking.NetworkPolicy): initialized
I0801 04:48:07.733565  116165 watch_cache.go:521] Replace watchCache (rev: 27599) 
I0801 04:48:07.733865  116165 store.go:1378] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0801 04:48:07.733915  116165 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0801 04:48:07.734120  116165 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.734253  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.734273  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.735079  116165 cacher.go:402] cacher (*networking.Ingress): initialized
I0801 04:48:07.735101  116165 watch_cache.go:521] Replace watchCache (rev: 27599) 
I0801 04:48:07.742351  116165 store.go:1378] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0801 04:48:07.742530  116165 reflector.go:243] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0801 04:48:07.742578  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.742769  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.742809  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.743753  116165 cacher.go:402] cacher (*networking.IngressClass): initialized
I0801 04:48:07.743775  116165 watch_cache.go:521] Replace watchCache (rev: 27599) 
I0801 04:48:07.750837  116165 store.go:1378] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0801 04:48:07.750880  116165 reflector.go:243] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0801 04:48:07.751859  116165 cacher.go:402] cacher (*networking.Ingress): initialized
I0801 04:48:07.751881  116165 watch_cache.go:521] Replace watchCache (rev: 27599) 
I0801 04:48:07.763809  116165 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.764031  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.764161  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.779687  116165 store.go:1378] Monitoring ingressclasses.networking.k8s.io count at <storage-prefix>//ingressclasses
I0801 04:48:07.779722  116165 master.go:539] Enabling API group "networking.k8s.io".
I0801 04:48:07.779831  116165 reflector.go:243] Listing and watching *networking.IngressClass from storage/cacher.go:/ingressclasses
I0801 04:48:07.780022  116165 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.780184  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.780225  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.780986  116165 store.go:1378] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0801 04:48:07.781081  116165 reflector.go:243] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0801 04:48:07.781258  116165 master.go:539] Enabling API group "node.k8s.io".
I0801 04:48:07.781668  116165 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.781829  116165 cacher.go:402] cacher (*networking.IngressClass): initialized
I0801 04:48:07.781872  116165 watch_cache.go:521] Replace watchCache (rev: 27601) 
I0801 04:48:07.782055  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.782142  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.782386  116165 cacher.go:402] cacher (*node.RuntimeClass): initialized
I0801 04:48:07.782707  116165 watch_cache.go:521] Replace watchCache (rev: 27601) 
I0801 04:48:07.789964  116165 store.go:1378] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0801 04:48:07.790042  116165 reflector.go:243] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0801 04:48:07.790199  116165 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.790415  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.790437  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.791259  116165 store.go:1378] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0801 04:48:07.791285  116165 master.go:539] Enabling API group "policy".
I0801 04:48:07.791353  116165 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.791585  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.791612  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.791818  116165 reflector.go:243] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0801 04:48:07.793105  116165 store.go:1378] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0801 04:48:07.793222  116165 cacher.go:402] cacher (*policy.PodDisruptionBudget): initialized
I0801 04:48:07.793257  116165 watch_cache.go:521] Replace watchCache (rev: 27601) 
I0801 04:48:07.793358  116165 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.793472  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.793491  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.799328  116165 cacher.go:402] cacher (*policy.PodSecurityPolicy): initialized
I0801 04:48:07.799350  116165 watch_cache.go:521] Replace watchCache (rev: 27601) 
I0801 04:48:07.799779  116165 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0801 04:48:07.803881  116165 store.go:1378] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0801 04:48:07.804045  116165 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0801 04:48:07.803989  116165 cacher.go:402] cacher (*rbac.Role): initialized
I0801 04:48:07.804141  116165 watch_cache.go:521] Replace watchCache (rev: 27603) 
I0801 04:48:07.805481  116165 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.805728  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.805804  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.807405  116165 store.go:1378] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0801 04:48:07.807644  116165 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.807806  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.807850  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.808091  116165 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0801 04:48:07.810043  116165 store.go:1378] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0801 04:48:07.810083  116165 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0801 04:48:07.810117  116165 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.810286  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.810318  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.811058  116165 store.go:1378] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0801 04:48:07.811261  116165 reflector.go:243] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0801 04:48:07.815004  116165 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.815243  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.815269  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.816945  116165 store.go:1378] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0801 04:48:07.817020  116165 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.817255  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.817277  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.819122  116165 reflector.go:243] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0801 04:48:07.819773  116165 cacher.go:402] cacher (*rbac.RoleBinding): initialized
I0801 04:48:07.819799  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.820151  116165 cacher.go:402] cacher (*rbac.Role): initialized
I0801 04:48:07.820177  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.821184  116165 cacher.go:402] cacher (*rbac.ClusterRoleBinding): initialized
I0801 04:48:07.821764  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.821529  116165 cacher.go:402] cacher (*rbac.RoleBinding): initialized
I0801 04:48:07.821804  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.821685  116165 cacher.go:402] cacher (*rbac.ClusterRole): initialized
I0801 04:48:07.821891  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.830319  116165 store.go:1378] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0801 04:48:07.830498  116165 reflector.go:243] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0801 04:48:07.830596  116165 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.830797  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.830840  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.832141  116165 store.go:1378] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0801 04:48:07.832186  116165 master.go:539] Enabling API group "rbac.authorization.k8s.io".
I0801 04:48:07.832194  116165 reflector.go:243] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0801 04:48:07.832147  116165 cacher.go:402] cacher (*rbac.ClusterRole): initialized
I0801 04:48:07.833063  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.835096  116165 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.835284  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.835311  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.836213  116165 store.go:1378] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0801 04:48:07.837072  116165 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0801 04:48:07.837249  116165 cacher.go:402] cacher (*rbac.ClusterRoleBinding): initialized
I0801 04:48:07.837262  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.838133  116165 cacher.go:402] cacher (*scheduling.PriorityClass): initialized
I0801 04:48:07.838154  116165 watch_cache.go:521] Replace watchCache (rev: 27605) 
I0801 04:48:07.845321  116165 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.845502  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.845532  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.847057  116165 store.go:1378] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0801 04:48:07.847087  116165 master.go:539] Enabling API group "scheduling.k8s.io".
I0801 04:48:07.847146  116165 reflector.go:243] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0801 04:48:07.847219  116165 master.go:528] Skipping disabled API group "settings.k8s.io".
I0801 04:48:07.847462  116165 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.847650  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.847685  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.848228  116165 cacher.go:402] cacher (*scheduling.PriorityClass): initialized
I0801 04:48:07.848897  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.849059  116165 store.go:1378] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0801 04:48:07.849164  116165 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0801 04:48:07.855977  116165 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.856159  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.856189  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.857604  116165 cacher.go:402] cacher (*storage.StorageClass): initialized
I0801 04:48:07.857630  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.857835  116165 store.go:1378] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0801 04:48:07.857922  116165 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0801 04:48:07.858088  116165 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.858263  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.858306  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.859408  116165 cacher.go:402] cacher (*storage.VolumeAttachment): initialized
I0801 04:48:07.859444  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.859455  116165 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0801 04:48:07.859426  116165 store.go:1378] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0801 04:48:07.859714  116165 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.859889  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.859911  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.861043  116165 cacher.go:402] cacher (*storage.CSINode): initialized
I0801 04:48:07.861166  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.861709  116165 store.go:1378] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0801 04:48:07.861825  116165 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0801 04:48:07.861921  116165 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.862050  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.862075  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.862702  116165 cacher.go:402] cacher (*storage.CSIDriver): initialized
I0801 04:48:07.862720  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.862865  116165 store.go:1378] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0801 04:48:07.863058  116165 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.863196  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.863221  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.863405  116165 reflector.go:243] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0801 04:48:07.865133  116165 cacher.go:402] cacher (*storage.StorageClass): initialized
I0801 04:48:07.865155  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.865588  116165 store.go:1378] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0801 04:48:07.865806  116165 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.865967  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.865985  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.866198  116165 reflector.go:243] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0801 04:48:07.871339  116165 cacher.go:402] cacher (*storage.VolumeAttachment): initialized
I0801 04:48:07.871366  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.872916  116165 store.go:1378] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0801 04:48:07.872979  116165 reflector.go:243] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0801 04:48:07.873354  116165 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.873573  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.873617  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.874172  116165 cacher.go:402] cacher (*storage.CSINode): initialized
I0801 04:48:07.874377  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.874514  116165 store.go:1378] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0801 04:48:07.874605  116165 reflector.go:243] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0801 04:48:07.874612  116165 master.go:539] Enabling API group "storage.k8s.io".
I0801 04:48:07.874730  116165 master.go:528] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0801 04:48:07.874984  116165 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.875236  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.875263  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.875508  116165 cacher.go:402] cacher (*storage.CSIDriver): initialized
I0801 04:48:07.875538  116165 watch_cache.go:521] Replace watchCache (rev: 27607) 
I0801 04:48:07.876028  116165 store.go:1378] Monitoring deployments.apps count at <storage-prefix>//deployments
I0801 04:48:07.876136  116165 reflector.go:243] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0801 04:48:07.876229  116165 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.877030  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.877058  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.888369  116165 cacher.go:402] cacher (*apps.Deployment): initialized
I0801 04:48:07.889149  116165 watch_cache.go:521] Replace watchCache (rev: 27608) 
I0801 04:48:07.890210  116165 store.go:1378] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0801 04:48:07.890412  116165 reflector.go:243] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0801 04:48:07.890627  116165 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.890801  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.890837  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.891855  116165 store.go:1378] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0801 04:48:07.891895  116165 cacher.go:402] cacher (*apps.StatefulSet): initialized
I0801 04:48:07.891912  116165 watch_cache.go:521] Replace watchCache (rev: 27610) 
I0801 04:48:07.891920  116165 reflector.go:243] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0801 04:48:07.892312  116165 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.899591  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.899664  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.899651  116165 cacher.go:402] cacher (*apps.DaemonSet): initialized
I0801 04:48:07.899686  116165 watch_cache.go:521] Replace watchCache (rev: 27610) 
I0801 04:48:07.901480  116165 store.go:1378] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0801 04:48:07.901940  116165 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.901547  116165 reflector.go:243] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0801 04:48:07.902261  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.902297  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.903470  116165 store.go:1378] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0801 04:48:07.903498  116165 master.go:539] Enabling API group "apps".
I0801 04:48:07.903666  116165 reflector.go:243] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0801 04:48:07.903762  116165 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.903986  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.904017  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.906109  116165 store.go:1378] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0801 04:48:07.906343  116165 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.906564  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.906596  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.906847  116165 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0801 04:48:07.907624  116165 store.go:1378] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0801 04:48:07.907862  116165 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0801 04:48:07.914332  116165 cacher.go:402] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized
I0801 04:48:07.907870  116165 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.914530  116165 watch_cache.go:521] Replace watchCache (rev: 27611) 
I0801 04:48:07.914756  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.914783  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.914940  116165 cacher.go:402] cacher (*apps.ControllerRevision): initialized
I0801 04:48:07.915064  116165 watch_cache.go:521] Replace watchCache (rev: 27611) 
I0801 04:48:07.917674  116165 cacher.go:402] cacher (*apps.ReplicaSet): initialized
I0801 04:48:07.917698  116165 watch_cache.go:521] Replace watchCache (rev: 27611) 
I0801 04:48:07.918936  116165 cacher.go:402] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized
I0801 04:48:07.918953  116165 watch_cache.go:521] Replace watchCache (rev: 27611) 
I0801 04:48:07.922224  116165 store.go:1378] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0801 04:48:07.922324  116165 reflector.go:243] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0801 04:48:07.923360  116165 cacher.go:402] cacher (*admissionregistration.ValidatingWebhookConfiguration): initialized
I0801 04:48:07.923382  116165 watch_cache.go:521] Replace watchCache (rev: 27613) 
I0801 04:48:07.923763  116165 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.924354  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.924498  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.933407  116165 store.go:1378] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0801 04:48:07.933445  116165 master.go:539] Enabling API group "admissionregistration.k8s.io".
I0801 04:48:07.933875  116165 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.933507  116165 reflector.go:243] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0801 04:48:07.934246  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.934271  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.935243  116165 store.go:1378] Monitoring events count at <storage-prefix>//events
I0801 04:48:07.935318  116165 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.935672  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:07.935698  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:07.935883  116165 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0801 04:48:07.936229  116165 cacher.go:402] cacher (*admissionregistration.MutatingWebhookConfiguration): initialized
I0801 04:48:07.936242  116165 watch_cache.go:521] Replace watchCache (rev: 27615) 
I0801 04:48:07.937744  116165 store.go:1378] Monitoring events count at <storage-prefix>//events
I0801 04:48:07.937795  116165 master.go:539] Enabling API group "events.k8s.io".
I0801 04:48:07.937950  116165 reflector.go:243] Listing and watching *core.Event from storage/cacher.go:/events
I0801 04:48:07.944017  116165 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.944413  116165 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.949914  116165 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.950149  116165 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.950316  116165 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.950459  116165 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.950729  116165 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.950874  116165 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.951055  116165 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.951196  116165 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.952315  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.953450  116165 cacher.go:402] cacher (*core.Event): initialized
I0801 04:48:07.953467  116165 watch_cache.go:521] Replace watchCache (rev: 27615) 
I0801 04:48:07.953450  116165 cacher.go:402] cacher (*core.Event): initialized
I0801 04:48:07.953572  116165 watch_cache.go:521] Replace watchCache (rev: 27615) 
I0801 04:48:07.953246  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.961429  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.961850  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.962919  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.963265  116165 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.970882  116165 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.971200  116165 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.972082  116165 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.972579  116165 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:07.972654  116165 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
I0801 04:48:07.979798  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.980174  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.980555  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.981321  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.987162  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.987576  116165 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.988712  116165 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.989955  116165 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.996375  116165 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:07.996513  116165 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0801 04:48:07.997502  116165 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:07.997837  116165 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.005418  116165 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.006926  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.007408  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.015512  116165 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.016243  116165 storage_factory.go:285] storing ingressclasses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.018685  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.019053  116165 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.025913  116165 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.026036  116165 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0801 04:48:08.026950  116165 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.027338  116165 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.033171  116165 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.034411  116165 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.034883  116165 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.039396  116165 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.040332  116165 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.041393  116165 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.042269  116165 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.047566  116165 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.048381  116165 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.048955  116165 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0801 04:48:08.049618  116165 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.054652  116165 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.054752  116165 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0801 04:48:08.055455  116165 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.055924  116165 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.060115  116165 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.060959  116165 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.061321  116165 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.061907  116165 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.062419  116165 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.065828  116165 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.066434  116165 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.066496  116165 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0801 04:48:08.070520  116165 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.071199  116165 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.071490  116165 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.072217  116165 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.076480  116165 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.077443  116165 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.078225  116165 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.078517  116165 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.078837  116165 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.084780  116165 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.085327  116165 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.085706  116165 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.085794  116165 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
W0801 04:48:08.085810  116165 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
I0801 04:48:08.086726  116165 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.093533  116165 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.094377  116165 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.100461  116165 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.102231  116165 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
I0801 04:48:08.108149  116165 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"81aff130-d996-4f36-8888-3e04fcdaec5b", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000, DBMetricPollInterval:30000000000}
W0801 04:48:08.120492  116165 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0801 04:48:08.121120  116165 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0801 04:48:08.121234  116165 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0801 04:48:08.121978  116165 reflector.go:207] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0801 04:48:08.122128  116165 reflector.go:243] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0801 04:48:08.122321  116165 healthz.go:239] healthz check failed: etcd,poststarthook/bootstrap-controller,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/bootstrap-controller failed: not finished
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.122614  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.672809ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39908" resp=0
I0801 04:48:08.124586  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0" latency="1.471362ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39908" resp=200
I0801 04:48:08.126278  116165 get.go:259] "Starting watch" path="/api/v1/namespaces/kube-system/configmaps" resourceVersion="27583" labels="" fields="" timeout="9m51s"
I0801 04:48:08.127615  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="7.83775ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=404
I0801 04:48:08.135697  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.283592ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=200
I0801 04:48:08.144071  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.809071ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=200
I0801 04:48:08.148127  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.148227  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="237.555µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=0
I0801 04:48:08.150752  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.113796ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=200
I0801 04:48:08.151297  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/services" latency="1.196031ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:08.163552  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="5.85369ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39910" resp=404
I0801 04:48:08.169342  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="4.02062ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:08.171293  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="1.446082ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:08.174166  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="2.438006ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:08.175692  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-node-lease" latency="1.061158ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:08.179710  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="3.607145ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:08.223860  116165 shared_informer.go:270] caches populated
I0801 04:48:08.223912  116165 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0801 04:48:08.225364  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.225594  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.014561ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.249869  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.249983  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="331.189µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.323620  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.323720  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="388.591µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.349820  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.349920  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="297.372µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.423715  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.423831  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="358.553µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.450139  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.450234  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="375.153µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.523732  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.523841  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="437.113µs" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.549830  116165 healthz.go:239] healthz check failed: etcd,poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]etcd failed: etcd client connection not yet established
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.549969  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="330.28µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.555013  116165 client.go:360] parsed scheme: "endpoint"
I0801 04:48:08.555106  116165 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:48:08.625576  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.625699  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.979808ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.655564  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.655781  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="6.0794ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.725233  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.725374  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.962026ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.751779  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.751911  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.351376ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.825192  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.825324  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.8272ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.866479  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.866625  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="8.224652ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.924409  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.924541  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.070284ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:08.950487  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:08.950619  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.062745ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.024770  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:09.024879  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.45831ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.059508  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:09.059644  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="9.711779ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.122223  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.170742ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=200
I0801 04:48:09.122226  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical" latency="8.181316ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.127638  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="4.927598ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:09.127830  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency="4.894715ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=201
I0801 04:48:09.128023  116165 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0801 04:48:09.128703  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes
[-]poststarthook/rbac/bootstrap-roles failed: not finished
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0801 04:48:09.128771  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="5.241638ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40170" resp=0
I0801 04:48:09.131051  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical" latency="2.850908ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.131059  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency="2.953626ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39904" resp=404
I0801 04:48:09.133105  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency="1.508904ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.133598  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/scheduling.k8s.io/v1/priorityclasses" latency="2.115175ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.133804  116165 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0801 04:48:09.133822  116165 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0801 04:48:09.134518  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency="1.017387ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.136941  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency="1.226221ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.153292  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency="3.769698ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.166535  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.166691  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="14.511352ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.169527  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency="15.697371ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.177519  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency="6.811915ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.182025  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin" latency="3.970881ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.190676  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.082783ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.190928  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0801 04:48:09.193812  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery" latency="2.585103ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.197065  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.402401ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.197304  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0801 04:48:09.198575  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user" latency="980.404µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.215785  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="10.50456ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.216103  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0801 04:48:09.218620  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer" latency="1.475991ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.221892  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.431426ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.222247  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0801 04:48:09.223637  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/admin" latency="1.092978ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.225074  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.225170  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.813254ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.227560  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.412689ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.228046  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/admin
I0801 04:48:09.234897  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/edit" latency="5.619334ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.251957  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="16.409539ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.252196  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/edit
I0801 04:48:09.254871  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.255015  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.611945ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.255395  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/view" latency="2.79344ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.258876  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.731968ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.259097  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/view
I0801 04:48:09.261570  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin" latency="1.457942ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.269832  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.811603ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.270073  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0801 04:48:09.277197  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit" latency="6.8337ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.281013  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.81796ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.287270  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0801 04:48:09.289282  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view" latency="1.702791ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.306021  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="10.259775ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.306597  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0801 04:48:09.310246  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster" latency="3.318556ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.314110  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.298811ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.314396  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0801 04:48:09.319639  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node" latency="4.970022ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.327621  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="6.401431ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.327915  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node
I0801 04:48:09.338206  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector" latency="10.052189ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.338380  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.338430  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="12.382273ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.344699  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="5.714841ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.351619  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0801 04:48:09.353689  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.353690  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin" latency="1.796058ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.353773  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.552267ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.367703  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="10.944698ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.368040  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0801 04:48:09.370917  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper" latency="1.870201ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.383647  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="12.281506ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.385114  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0801 04:48:09.389117  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator" latency="2.468036ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.393315  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.115132ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.393548  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0801 04:48:09.394895  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator" latency="1.110194ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.398712  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.018281ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.398960  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0801 04:48:09.401741  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager" latency="2.366629ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.408031  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="5.141642ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.409135  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0801 04:48:09.417977  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns" latency="8.368061ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.421928  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.288567ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.422367  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0801 04:48:09.434832  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner" latency="11.847405ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.435510  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.435603  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.124424ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.453031  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="17.5343ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.454924  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0801 04:48:09.455647  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.455735  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="5.345901ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.457086  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient" latency="1.925106ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.459576  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.994333ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.460921  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0801 04:48:09.463526  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" latency="1.632592ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.467044  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.04127ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.467387  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0801 04:48:09.468995  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler" latency="1.403439ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.478352  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.854887ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.478674  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0801 04:48:09.481304  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:legacy-unknown-approver" latency="2.253729ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.484117  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.195245ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.485474  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:legacy-unknown-approver
I0801 04:48:09.495852  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kubelet-serving-approver" latency="10.123932ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.499355  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.83332ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.499653  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kubelet-serving-approver
I0801 04:48:09.502047  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-approver" latency="2.171457ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.511175  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.122311ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.511510  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-approver
I0801 04:48:09.519898  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver" latency="8.020947ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.530490  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="9.158479ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.531144  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:kube-apiserver-client-kubelet-approver
I0801 04:48:09.533610  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier" latency="2.217282ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.533906  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.534183  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="4.063881ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.540067  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="5.935736ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.541182  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0801 04:48:09.543200  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler" latency="1.731994ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.547488  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.603051ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.547759  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0801 04:48:09.555256  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.555333  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="5.772421ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=0
I0801 04:48:09.555935  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller" latency="1.00128ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.561150  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="4.026289ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.561433  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0801 04:48:09.562517  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller" latency="871.078µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.570791  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.665651ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.571057  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0801 04:48:09.573308  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller" latency="1.988444ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.581578  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.306965ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.581952  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0801 04:48:09.586159  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller" latency="3.51437ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.592041  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="5.272291ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.593049  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0801 04:48:09.594329  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller" latency="982.729µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.596207  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.400633ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.597245  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0801 04:48:09.598418  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller" latency="951.204µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.602469  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.642334ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.602710  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0801 04:48:09.606206  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller" latency="3.142784ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.615492  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.617934ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:09.615837  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0801 04:48:09.633609  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.633728  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller" latency="17.633132ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:09.633729  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="10.248162ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40170" resp=0
I0801 04:48:09.643286  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.312938ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.643667  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0801 04:48:09.646073  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslicemirroring-controller" latency="2.080736ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.649678  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.984067ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.649988  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0801 04:48:09.650581  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.650678  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.00007ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.651364  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller" latency="932.032µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.655123  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.300826ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.655772  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0801 04:48:09.657918  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector" latency="1.836469ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.668011  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="9.319367ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.668307  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0801 04:48:09.670504  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler" latency="1.103607ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.674894  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.756658ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.675237  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0801 04:48:09.677632  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller" latency="2.088598ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.679869  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.695734ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.680119  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0801 04:48:09.684142  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller" latency="2.447077ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.687305  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.870573ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.687571  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0801 04:48:09.689301  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller" latency="1.480925ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.691093  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.327843ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.691447  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0801 04:48:09.695269  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder" latency="3.552619ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.699602  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.630813ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.699867  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0801 04:48:09.702465  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector" latency="2.34635ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.707084  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="4.096837ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.713623  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0801 04:48:09.718011  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller" latency="4.018827ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.752841  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.752986  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="26.356759ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.753156  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="34.596905ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.753582  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0801 04:48:09.754896  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller" latency="997.1µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=404
I0801 04:48:09.755673  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.755903  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.660415ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:09.764358  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="8.786527ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40170" resp=201
I0801 04:48:09.765457  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0801 04:48:09.766879  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller" latency="1.130432ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.769955  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="2.380224ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.770256  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0801 04:48:09.772156  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller" latency="1.65148ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.781509  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.784802ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.782716  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0801 04:48:09.786912  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller" latency="3.843074ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.799036  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.934412ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.806121  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0801 04:48:09.813078  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller" latency="6.52614ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.815931  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="1.889416ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.816931  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0801 04:48:09.820211  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller" latency="2.949296ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.825184  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.334419ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.825853  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0801 04:48:09.826582  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.826684  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.476909ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.828224  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller" latency="1.444257ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.833133  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.492925ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.833709  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0801 04:48:09.835613  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller" latency="1.574912ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.839347  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="3.137295ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.839801  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0801 04:48:09.857936  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller" latency="17.754532ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.864528  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="5.870882ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.865052  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0801 04:48:09.889650  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller" latency="24.288861ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.891973  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.892076  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="31.718457ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.897465  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles" latency="7.151044ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.897782  116165 storage_rbac.go:220] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0801 04:48:09.921818  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin" latency="23.72375ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.927311  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="4.852617ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.927639  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0801 04:48:09.934227  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.934351  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="10.848047ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.935522  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery" latency="7.616546ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.939246  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.056307ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.939514  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0801 04:48:09.946051  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user" latency="6.215772ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.951098  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="4.441919ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.951362  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0801 04:48:09.956454  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer" latency="4.732983ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.958372  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:09.958480  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="7.949333ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:09.971467  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="14.53119ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.971802  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0801 04:48:09.975589  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier" latency="1.619439ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.979418  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.271393ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.981554  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0801 04:48:09.986913  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager" latency="5.092914ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.991329  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.79993ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.991665  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0801 04:48:09.994057  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns" latency="2.081777ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:09.998279  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.630449ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:09.998519  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0801 04:48:10.009483  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler" latency="4.173295ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.014207  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="4.025309ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.014545  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0801 04:48:10.016456  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler" latency="1.485058ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.020277  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.894478ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.021561  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0801 04:48:10.029167  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node" latency="1.463477ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.030037  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.030124  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="6.831945ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.032544  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.568344ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.032804  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0801 04:48:10.035278  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller" latency="2.267016ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.044028  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.203381ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.044355  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0801 04:48:10.058985  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.059624  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="8.975901ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.061648  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller" latency="15.923812ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.078174  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="16.008706ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.078460  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0801 04:48:10.080028  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller" latency="1.306534ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.097230  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.94399ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.097585  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0801 04:48:10.119128  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller" latency="4.1503ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.125342  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.125469  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.808203ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:10.137487  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.067464ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.137843  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0801 04:48:10.150751  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.150865  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.15045ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:10.155396  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller" latency="1.241518ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.181319  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="6.798001ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.181702  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0801 04:48:10.197715  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller" latency="3.530438ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.221200  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="6.977523ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:10.221538  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0801 04:48:10.225396  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.225506  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.103936ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:10.235876  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller" latency="1.258753ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.254592  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.254709  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="5.170278ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:10.256013  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.881766ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.256942  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0801 04:48:10.281431  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller" latency="6.802507ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.296245  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.12682ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.297761  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I0801 04:48:10.317247  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslicemirroring-controller" latency="2.716782ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.325300  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.325420  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.896674ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.342061  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="6.933044ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.342357  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslicemirroring-controller
I0801 04:48:10.355822  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller" latency="1.453064ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:10.355872  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.356072  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="4.279123ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.377197  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.975882ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.377557  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0801 04:48:10.401752  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector" latency="2.733415ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.417644  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.17641ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.418010  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0801 04:48:10.428211  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.429025  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.613164ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.435828  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler" latency="1.469399ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.450552  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.450689  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.136246ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.457226  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.131803ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.457543  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0801 04:48:10.475859  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller" latency="1.288123ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.496865  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.088195ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.497253  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0801 04:48:10.515426  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller" latency="1.269407ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.524219  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.524839  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.551222ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.536925  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.669802ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.537202  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0801 04:48:10.550590  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.550699  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.067897ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.555181  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller" latency="1.044353ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.576153  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.955895ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.577021  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0801 04:48:10.605508  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder" latency="11.27239ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.617261  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.929611ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.617538  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0801 04:48:10.625097  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.625232  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.697823ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.636204  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector" latency="1.992027ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.650719  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.650843  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.197503ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.656177  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.005512ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.657044  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0801 04:48:10.675497  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller" latency="1.292042ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.696224  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.011093ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.697103  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0801 04:48:10.715837  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller" latency="1.668904ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.725225  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.725377  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.949288ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.736206  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.04189ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.739988  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0801 04:48:10.750744  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.750940  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.334102ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.755411  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller" latency="1.219738ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.776345  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.070201ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.776773  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0801 04:48:10.795652  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller" latency="1.456935ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.817184  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.976255ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.817493  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0801 04:48:10.824414  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.826990  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.685296ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.835477  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller" latency="1.059282ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.852954  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.853072  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.511639ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.857010  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.51989ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.857264  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0801 04:48:10.875259  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller" latency="1.100972ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.897844  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.935747ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.898190  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0801 04:48:10.918376  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller" latency="1.186056ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.924180  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.924950  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.662832ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.937097  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.864568ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.937467  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0801 04:48:10.950651  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:10.950760  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.133323ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:10.955214  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller" latency="962.455µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:10.978048  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.799915ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:10.978461  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0801 04:48:10.996722  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller" latency="1.488555ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.018264  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="3.985651ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.018513  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0801 04:48:11.024857  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.024952  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.620298ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.038047  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller" latency="3.748667ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.050757  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.050865  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.335558ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.055883  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="1.632529ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.056200  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0801 04:48:11.075356  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller" latency="1.128571ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.096270  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/clusterrolebindings" latency="2.006134ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.096533  116165 storage_rbac.go:248] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0801 04:48:11.115387  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader" latency="1.099877ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.124193  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.124924  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.521817ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.124974  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="8.881286ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.135946  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="1.852046ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.136172  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0801 04:48:11.152886  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.153039  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="2.397604ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.155358  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer" latency="1.115004ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.157745  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.91715ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.176843  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.531724ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.177185  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0801 04:48:11.196082  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider" latency="1.673517ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.198365  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.17014ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.215860  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="1.648566ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.216149  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0801 04:48:11.227971  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.228093  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="4.789089ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.235155  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner" latency="1.045521ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.237557  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.859722ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.250733  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.250827  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.188859ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.256856  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="2.086961ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.257160  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0801 04:48:11.279216  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager" latency="1.310578ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.287906  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="5.342967ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.295830  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="1.703969ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.296126  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0801 04:48:11.316036  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler" latency="1.845073ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.318646  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.656388ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.324121  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.324221  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.020463ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.349607  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles" latency="5.014472ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.349914  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0801 04:48:11.350799  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.350886  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.468748ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.355240  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer" latency="909.6µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.357539  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="1.815423ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.375837  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles" latency="1.665695ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.376149  116165 storage_rbac.go:279] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0801 04:48:11.395191  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader" latency="991.505µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.401737  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="6.033903ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.416083  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="1.959308ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.416984  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0801 04:48:11.427935  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.428040  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="4.735406ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.435989  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager" latency="1.829384ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.438596  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.336014ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.450592  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.450714  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.11033ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.456129  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.054345ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.457097  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0801 04:48:11.475481  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler" latency="1.25836ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.478095  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="2.050825ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.501354  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="7.100377ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.501662  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0801 04:48:11.517089  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer" latency="1.721957ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.521494  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="3.123262ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.541838  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.542046  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="18.799307ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=0
I0801 04:48:11.545153  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="10.946255ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.545621  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0801 04:48:11.550799  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.550908  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.216702ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.555319  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider" latency="1.148519ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.557781  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="1.823812ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.576238  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.215894ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.577359  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0801 04:48:11.595420  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner" latency="1.349952ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.598291  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system" latency="2.26545ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.616239  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings" latency="2.059157ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=201
I0801 04:48:11.617556  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0801 04:48:11.626612  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.626793  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="3.078808ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.640848  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer" latency="1.847751ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=404
I0801 04:48:11.642758  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-public" latency="1.459739ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.654370  116165 healthz.go:239] healthz check failed: poststarthook/rbac/bootstrap-roles
[-]poststarthook/rbac/bootstrap-roles failed: not finished
I0801 04:48:11.654482  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="4.351734ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:11.657912  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings" latency="3.766056ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.658322  116165 storage_rbac.go:309] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0801 04:48:11.725081  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.734015ms" userAgent="Go-http-client/1.1" srcIP="127.0.0.1:40568" resp=200
W0801 04:48:11.726357  116165 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0801 04:48:11.726401  116165 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0801 04:48:11.726413  116165 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0801 04:48:11.735779  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments" latency="8.453237ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.736226  116165 replica_set.go:182] Starting replicaset controller
I0801 04:48:11.736828  116165 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0801 04:48:11.736901  116165 reflector.go:207] Starting reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.736922  116165 reflector.go:243] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.736993  116165 reflector.go:207] Starting reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.737009  116165 reflector.go:243] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.737034  116165 reflector.go:207] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.737050  116165 reflector.go:243] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:134
I0801 04:48:11.737866  116165 deployment_controller.go:153] Starting deployment controller
I0801 04:48:11.737886  116165 shared_informer.go:240] Waiting for caches to sync for deployment
I0801 04:48:11.737900  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/deployments?limit=500&resourceVersion=0" latency="438.397µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.738358  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/replicasets?limit=500&resourceVersion=0" latency="327.606µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:40604" resp=200
I0801 04:48:11.738473  116165 deployment_controller.go:169] Adding deployment deployment
I0801 04:48:11.738794  116165 get.go:259] "Starting watch" path="/apis/apps/v1/deployments" resourceVersion="28124" labels="" fields="" timeout="7m6s"
I0801 04:48:11.738818  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/pods?limit=500&resourceVersion=0" latency="308.736µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:39918" resp=200
I0801 04:48:11.739008  116165 get.go:259] "Starting watch" path="/apis/apps/v1/replicasets" resourceVersion="27611" labels="" fields="" timeout="8m9s"
I0801 04:48:11.739463  116165 get.go:259] "Starting watch" path="/api/v1/pods" resourceVersion="27583" labels="" fields="" timeout="8m43s"
I0801 04:48:11.744137  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="6.614411ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.750904  116165 httplog.go:89] "HTTP" verb="GET" URI="/healthz" latency="1.357393ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.752900  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default" latency="974.088µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.758400  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces" latency="5.059631ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.759786  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/services/kubernetes" latency="968.515µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.769401  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/default/services" latency="8.106868ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.770912  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="879.965µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
I0801 04:48:11.773910  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/default/endpoints" latency="1.866218ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.775277  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency="873.025µs" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=404
W0801 04:48:11.775560  116165 warnings.go:67] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.22+, unavailable in v1.25+
I0801 04:48:11.778044  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices" latency="2.099455ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=201
W0801 04:48:11.778247  116165 warnings.go:67] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.22+, unavailable in v1.25+
I0801 04:48:11.837118  116165 shared_informer.go:270] caches populated
I0801 04:48:11.837160  116165 shared_informer.go:247] Caches are synced for ReplicaSet 
I0801 04:48:11.837994  116165 shared_informer.go:270] caches populated
I0801 04:48:11.838024  116165 shared_informer.go:247] Caches are synced for deployment 
I0801 04:48:11.838200  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:11.838191901 +0000 UTC m=+165.379341921)
I0801 04:48:11.838551  116165 deployment_util.go:261] Updating replica set "deployment-b58dbf467" revision to 1
I0801 04:48:11.847749  116165 httplog.go:89] "HTTP" verb="POST" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets" latency="8.770723ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.852415  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set deployment-b58dbf467 to 10"
I0801 04:48:11.853235  116165 replica_set.go:286] Adding ReplicaSet test-deployment-available-condition/deployment-b58dbf467
I0801 04:48:11.853360  116165 controller_utils.go:203] Controller test-deployment-available-condition/deployment-b58dbf467 either never recorded expectations, or the ttl expired.
I0801 04:48:11.853377  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="6.543574ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40602" resp=200
I0801 04:48:11.853405  116165 controller_utils.go:220] Setting expectations &controller.ControlleeExpectations{add:10, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.853477  116165 replica_set.go:559] "Too few replicas" replicaSet="test-deployment-available-condition/deployment-b58dbf467" need=10 creating=10
I0801 04:48:11.853257  116165 deployment_controller.go:215] ReplicaSet deployment-b58dbf467 added.
I0801 04:48:11.857230  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="4.006668ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40608" resp=201
I0801 04:48:11.857392  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="3.556674ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:11.857607  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="3.789598ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40602" resp=201
I0801 04:48:11.857667  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11.852036819 +0000 UTC m=+165.393186846 - now: 2020-08-01 04:48:11.85766083 +0000 UTC m=+165.398810853]
I0801 04:48:11.857875  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-fwgcl
I0801 04:48:11.857621  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:11.857838  116165 replica_set.go:376] Pod deployment-b58dbf467-fwgcl created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-fwgcl", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-fwgcl", UID:"d340da1e-fde3-4448-80c3-198a87a446c6", ResourceVersion:"28139", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc0315a1bda), BlockOwnerDeletion:(*bool)(0xc0315a1bdb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0315a1c50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0315d89a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0315a1c58), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.858026  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:9, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.858152  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-fwgcl"
I0801 04:48:11.863579  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="5.467577ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40608" resp=409
I0801 04:48:11.863903  116165 replica_set.go:376] Pod deployment-b58dbf467-2bh87 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-2bh87", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-2bh87", UID:"6f4ea523-0b6d-4333-958a-445f50a9a8ca", ResourceVersion:"28140", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc03183302a), BlockOwnerDeletion:(*bool)(0xc03183302b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0318330a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc03181efc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0318330a8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.864072  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:8, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.864167  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="5.569085ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40612" resp=201
I0801 04:48:11.864930  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="5.547698ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40610" resp=201
I0801 04:48:11.864973  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (26.772218ms)
I0801 04:48:11.865030  116165 deployment_controller.go:490] "Error syncing deployment" deployment="test-deployment-available-condition/deployment" err="Operation cannot be fulfilled on deployments.apps \"deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0801 04:48:11.865068  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:11.865062625 +0000 UTC m=+165.406212631)
I0801 04:48:11.865099  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="6.879846ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.865432  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11 +0000 UTC - now: 2020-08-01 04:48:11.865426945 +0000 UTC m=+165.406576952]
I0801 04:48:11.865705  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-smdpf
I0801 04:48:11.865670  116165 replica_set.go:376] Pod deployment-b58dbf467-smdpf created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-smdpf", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-smdpf", UID:"1c74e88d-4fdb-4e8c-b5e0-20fc9cf082d3", ResourceVersion:"28142", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc0314e62ba), BlockOwnerDeletion:(*bool)(0xc0314e62bb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0314e6330), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0314c2700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0314e6338), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.865786  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:7, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.865924  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-smdpf"
I0801 04:48:11.866018  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-2bh87
I0801 04:48:11.866235  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-2bh87"
I0801 04:48:11.873812  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="7.819865ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40608" resp=201
I0801 04:48:11.877508  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="6.136104ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:11.877765  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-jtx8q
I0801 04:48:11.877832  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="6.137702ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:11.877852  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-jtx8q"
I0801 04:48:11.878128  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-pmjdc
I0801 04:48:11.878179  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-pmjdc"
I0801 04:48:11.878491  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="6.633832ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40616" resp=201
I0801 04:48:11.878720  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-4qxw4
I0801 04:48:11.878773  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-4qxw4"
I0801 04:48:11.878011  116165 replica_set.go:376] Pod deployment-b58dbf467-jtx8q created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-jtx8q", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jtx8q", UID:"1402de22-9f36-4d47-af3a-c2f3f6a04ff9", ResourceVersion:"28145", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc03167da1a), BlockOwnerDeletion:(*bool)(0xc03167da1b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc03167da90), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc031667730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc03167da98), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.878973  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:6, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.879038  116165 replica_set.go:376] Pod deployment-b58dbf467-pmjdc created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-pmjdc", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-pmjdc", UID:"68fcf779-57c0-4f1f-8458-1124e2190927", ResourceVersion:"28146", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc03167dcfa), BlockOwnerDeletion:(*bool)(0xc03167dcfb)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc03167dd70), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0316677a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc03167dd78), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.879340  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:5, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.879386  116165 replica_set.go:376] Pod deployment-b58dbf467-4qxw4 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-4qxw4", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-4qxw4", UID:"109fec9f-7ab9-4af0-b87e-ec2a6745b02f", ResourceVersion:"28147", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc0319248aa), BlockOwnerDeletion:(*bool)(0xc0319248ab)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc031924920), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0315d8f50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc031924928), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.879524  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:4, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.881452  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="5.718063ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40608" resp=201
I0801 04:48:11.882065  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="9.76264ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40618" resp=201
I0801 04:48:11.882075  116165 replica_set.go:376] Pod deployment-b58dbf467-r965p created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-r965p", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-r965p", UID:"d33beb82-0f58-42da-8d5c-38116fceaf14", ResourceVersion:"28149", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854091, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc0313e487a), BlockOwnerDeletion:(*bool)(0xc0313e487b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0313e48f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0313c2fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0313e48f8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:11.882177  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:3, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:11.882365  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-r965p
I0801 04:48:11.882620  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-r965p"
I0801 04:48:11.891165  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="25.414484ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40610" resp=200
I0801 04:48:11.891542  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (26.473202ms)
I0801 04:48:11.891591  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:11.891586855 +0000 UTC m=+165.432736862)
I0801 04:48:11.891994  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11 +0000 UTC - now: 2020-08-01 04:48:11.891987529 +0000 UTC m=+165.433137536]
I0801 04:48:11.892113  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:11.898075  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="5.136289ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40614" resp=409
I0801 04:48:11.898408  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (6.812227ms)
I0801 04:48:11.898469  116165 deployment_controller.go:490] "Error syncing deployment" deployment="test-deployment-available-condition/deployment" err="Operation cannot be fulfilled on deployments.apps \"deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0801 04:48:11.898517  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:11.898511682 +0000 UTC m=+165.439661687)
I0801 04:48:11.898966  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11 +0000 UTC - now: 2020-08-01 04:48:11.898959263 +0000 UTC m=+165.440109276]
I0801 04:48:11.899025  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:11.899050  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (535.118µs)
I0801 04:48:11.903592  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:11.903563058 +0000 UTC m=+165.444713083)
I0801 04:48:11.904116  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11 +0000 UTC - now: 2020-08-01 04:48:11.904109387 +0000 UTC m=+165.445259397]
I0801 04:48:11.904175  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:11.904192  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (626.349µs)
I0801 04:48:11.948951  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.737857ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.047542  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="2.143054ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.053844  116165 request.go:581] Throttling request took 171.88944ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:12.058565  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="4.040625ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:12.147465  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.870131ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.247216  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.870094ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.253804  116165 request.go:581] Throttling request took 371.236477ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/pods
I0801 04:48:12.262020  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="7.926609ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:12.262047  116165 replica_set.go:376] Pod deployment-b58dbf467-7fcmx created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-7fcmx", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-7fcmx", UID:"63a4917d-87f6-4181-a593-543c90ce63c7", ResourceVersion:"28174", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854092, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc0313e4f2a), BlockOwnerDeletion:(*bool)(0xc0313e4f2b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0313e4fa0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0313c3180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0313e4fa8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:12.262184  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:2, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.262392  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-7fcmx
I0801 04:48:12.262453  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-7fcmx"
I0801 04:48:12.348037  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="2.653748ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.447268  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.956218ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.453814  116165 request.go:581] Throttling request took 571.196256ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/pods
I0801 04:48:12.456962  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="2.804568ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:12.457283  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-9cz6w
I0801 04:48:12.457359  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-9cz6w"
I0801 04:48:12.458335  116165 replica_set.go:376] Pod deployment-b58dbf467-9cz6w created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-9cz6w", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-9cz6w", UID:"f338c8bb-e484-45c9-827d-074e4d986cfb", ResourceVersion:"28183", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854092, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc031b9234a), BlockOwnerDeletion:(*bool)(0xc031b9234b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc031b923c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0317c3b20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc031b923c8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:12.458489  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.546735  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.427937ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.647517  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.970447ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.653704  116165 request.go:581] Throttling request took 770.109007ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/pods
I0801 04:48:12.658636  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/pods" latency="3.606412ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:12.658948  116165 controller_utils.go:593] Controller deployment-b58dbf467 created pod deployment-b58dbf467-jzzt4
I0801 04:48:12.659025  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 0->0 (need 10), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0801 04:48:12.658898  116165 replica_set.go:376] Pod deployment-b58dbf467-jzzt4 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-b58dbf467-jzzt4", GenerateName:"deployment-b58dbf467-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jzzt4", UID:"01362f90-ef8a-43c8-a0bf-52dd30fa9e0d", ResourceVersion:"28189", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731854092, loc:(*time.Location)(0x6df5920)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"b58dbf467"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-b58dbf467", UID:"108a0cc5-fc0b-42c0-a991-349f5be0053e", Controller:(*bool)(0xc03199584a), BlockOwnerDeletion:(*bool)(0xc03199584b)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0319958c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0319b71f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0319958c8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0801 04:48:12.659076  116165 controller_utils.go:237] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.659050  116165 event.go:291] "Event occurred" object="test-deployment-available-condition/deployment-b58dbf467" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: deployment-b58dbf467-jzzt4"
I0801 04:48:12.668488  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="2.421693ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.668757  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.668802  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (815.450306ms)
I0801 04:48:12.668802  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.668783916 +0000 UTC m=+166.209933915)
I0801 04:48:12.668842  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.668934  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 0->10 (need 10), fullyLabeledReplicas 0->10, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.669101  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:11 +0000 UTC - now: 2020-08-01 04:48:12.669097414 +0000 UTC m=+166.210247413]
I0801 04:48:12.669126  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7198s
I0801 04:48:12.669137  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (349.808µs)
I0801 04:48:12.675723  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="6.4312ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.676062  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (7.222304ms)
I0801 04:48:12.677448  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.677599  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (161.684µs)
I0801 04:48:12.677647  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.677680  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.677666417 +0000 UTC m=+166.218816429)
I0801 04:48:12.695487  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="9.643511ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.695947  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:12.695964  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (18.292932ms)
I0801 04:48:12.696108  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.696101817 +0000 UTC m=+166.237251846)
I0801 04:48:12.697188  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:12 +0000 UTC - now: 2020-08-01 04:48:12.697177823 +0000 UTC m=+166.238327826]
I0801 04:48:12.697259  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:12.697292  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (1.186836ms)
I0801 04:48:12.748922  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="2.039499ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.751353  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.739414ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.753839  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.823674ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.763397  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/test-deployment-available-condition/pods?labelSelector=name%3Dtest" latency="8.839255ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.767661  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="3.348685ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.853827  116165 request.go:581] Throttling request took 794.847774ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:12.857647  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="3.48992ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40614" resp=201
I0801 04:48:12.927177  116165 request.go:581] Throttling request took 158.954396ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest
I0801 04:48:12.931074  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest" latency="3.476455ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.935249  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-2bh87/status" latency="2.853385ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.936011  116165 replica_set.go:439] Pod deployment-b58dbf467-2bh87 updated, objectMeta {Name:deployment-b58dbf467-2bh87 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-2bh87 UID:6f4ea523-0b6d-4333-958a-445f50a9a8ca ResourceVersion:28140 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc03183302a BlockOwnerDeletion:0xc03183302b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-2bh87 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-2bh87 UID:6f4ea523-0b6d-4333-958a-445f50a9a8ca ResourceVersion:28208 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031d8255a BlockOwnerDeletion:0xc031d8255b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.936152  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.936242  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.937032  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 0->1, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.939525  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-4qxw4/status" latency="1.812615ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.940192  116165 replica_set.go:439] Pod deployment-b58dbf467-4qxw4 updated, objectMeta {Name:deployment-b58dbf467-4qxw4 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-4qxw4 UID:109fec9f-7ab9-4af0-b87e-ec2a6745b02f ResourceVersion:28147 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0319248aa BlockOwnerDeletion:0xc0319248ab}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-4qxw4 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-4qxw4 UID:109fec9f-7ab9-4af0-b87e-ec2a6745b02f ResourceVersion:28209 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031bd4b9a BlockOwnerDeletion:0xc031bd4b9b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.940961  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.943884  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="5.091975ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.944171  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.944222  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.944206986 +0000 UTC m=+166.485356996)
I0801 04:48:12.944172  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (7.94311ms)
I0801 04:48:12.945179  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.945305  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 1->2, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.946053  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-7fcmx/status" latency="3.676875ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.947191  116165 replica_set.go:439] Pod deployment-b58dbf467-7fcmx updated, objectMeta {Name:deployment-b58dbf467-7fcmx GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-7fcmx UID:63a4917d-87f6-4181-a593-543c90ce63c7 ResourceVersion:28174 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0313e4f2a BlockOwnerDeletion:0xc0313e4f2b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-7fcmx GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-7fcmx UID:63a4917d-87f6-4181-a593-543c90ce63c7 ResourceVersion:28211 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0318cf11a BlockOwnerDeletion:0xc0318cf11b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.947284  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.948197  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="2.716004ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.948828  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="2.355555ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:12.949241  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (4.083179ms)
I0801 04:48:12.949286  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:12.949295  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.949313  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.949319  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (5.106177ms)
I0801 04:48:12.949350  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.949345867 +0000 UTC m=+166.490495889)
I0801 04:48:12.949378  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 2->3, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.950604  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-9cz6w/status" latency="4.113917ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.950630  116165 replica_set.go:439] Pod deployment-b58dbf467-9cz6w updated, objectMeta {Name:deployment-b58dbf467-9cz6w GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-9cz6w UID:f338c8bb-e484-45c9-827d-074e4d986cfb ResourceVersion:28183 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031b9234a BlockOwnerDeletion:0xc031b9234b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-9cz6w GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-9cz6w UID:f338c8bb-e484-45c9-827d-074e4d986cfb ResourceVersion:28214 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031bd549a BlockOwnerDeletion:0xc031bd549b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.951039  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.954277  116165 replica_set.go:439] Pod deployment-b58dbf467-fwgcl updated, objectMeta {Name:deployment-b58dbf467-fwgcl GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-fwgcl UID:d340da1e-fde3-4448-80c3-198a87a446c6 ResourceVersion:28139 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0315a1bda BlockOwnerDeletion:0xc0315a1bdb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-fwgcl GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-fwgcl UID:d340da1e-fde3-4448-80c3-198a87a446c6 ResourceVersion:28216 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031ca834a BlockOwnerDeletion:0xc031ca834b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.954382  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.954729  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="4.545766ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.954953  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (5.661748ms)
I0801 04:48:12.954982  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.955066  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 2->5, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.959215  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.960302  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-fwgcl/status" latency="9.018198ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.963624  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="4.743917ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=409
I0801 04:48:12.964137  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="3.579682ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:12.965409  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (16.056496ms)
I0801 04:48:12.965460  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.965454344 +0000 UTC m=+166.506604359)
I0801 04:48:12.965978  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:12.966586  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jtx8q/status" latency="5.303495ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.967096  116165 replica_set.go:439] Pod deployment-b58dbf467-jtx8q updated, objectMeta {Name:deployment-b58dbf467-jtx8q GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jtx8q UID:1402de22-9f36-4d47-af3a-c2f3f6a04ff9 ResourceVersion:28145 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc03167da1a BlockOwnerDeletion:0xc03167da1b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-jtx8q GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jtx8q UID:1402de22-9f36-4d47-af3a-c2f3f6a04ff9 ResourceVersion:28218 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031c3cd9a BlockOwnerDeletion:0xc031c3cd9b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.967219  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.967405  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467" latency="3.164054ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.967684  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 3->5, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.968057  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="1.8747ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=409
I0801 04:48:12.968303  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (2.842008ms)
I0801 04:48:12.968351  116165 deployment_controller.go:490] "Error syncing deployment" deployment="test-deployment-available-condition/deployment" err="Operation cannot be fulfilled on deployments.apps \"deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0801 04:48:12.968379  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.968374411 +0000 UTC m=+166.509524421)
I0801 04:48:12.970356  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.971293  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="3.292695ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.971557  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (16.578263ms)
I0801 04:48:12.971599  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.971694  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 5->6, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:12.979822  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jzzt4/status" latency="12.48512ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.981205  116165 replica_set.go:439] Pod deployment-b58dbf467-jzzt4 updated, objectMeta {Name:deployment-b58dbf467-jzzt4 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jzzt4 UID:01362f90-ef8a-43c8-a0bf-52dd30fa9e0d ResourceVersion:28189 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc03199584a BlockOwnerDeletion:0xc03199584b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-jzzt4 GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-jzzt4 UID:01362f90-ef8a-43c8-a0bf-52dd30fa9e0d ResourceVersion:28220 Generation:0 CreationTimestamp:2020-08-01 04:48:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031d82fba BlockOwnerDeletion:0xc031d82fbb}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.981345  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.982435  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="12.667923ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:12.982681  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:12.982871  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (14.489648ms)
I0801 04:48:12.982948  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:12.982943436 +0000 UTC m=+166.524093456)
I0801 04:48:12.991150  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:12.991557  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-pmjdc/status" latency="9.58131ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:12.991572  116165 replica_set.go:439] Pod deployment-b58dbf467-pmjdc updated, objectMeta {Name:deployment-b58dbf467-pmjdc GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-pmjdc UID:68fcf779-57c0-4f1f-8458-1124e2190927 ResourceVersion:28146 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc03167dcfa BlockOwnerDeletion:0xc03167dcfb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-pmjdc GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-pmjdc UID:68fcf779-57c0-4f1f-8458-1124e2190927 ResourceVersion:28224 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031d8385a BlockOwnerDeletion:0xc031d8385b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:12.991793  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:12.992102  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="20.138633ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:12.993871  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (22.274396ms)
I0801 04:48:12.993921  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:12.994322  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 6->8, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:13.002687  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-r965p/status" latency="9.66844ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:13.003977  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="20.294025ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:13.004295  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (21.344675ms)
I0801 04:48:13.004329  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.004324176 +0000 UTC m=+166.545474194)
I0801 04:48:13.005136  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:13.007815  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.008563  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="13.579969ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.008962  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (15.045185ms)
I0801 04:48:13.009131  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.009270  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (204.911µs)
I0801 04:48:13.009749  116165 replica_set.go:439] Pod deployment-b58dbf467-r965p updated, objectMeta {Name:deployment-b58dbf467-r965p GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-r965p UID:d33beb82-0f58-42da-8d5c-38116fceaf14 ResourceVersion:28149 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0313e487a BlockOwnerDeletion:0xc0313e487b}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-r965p GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-r965p UID:d33beb82-0f58-42da-8d5c-38116fceaf14 ResourceVersion:28225 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031dc406a BlockOwnerDeletion:0xc031dc406b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:13.009870  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:13.009910  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.009991  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 8->9, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:13.011682  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="3.482912ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=409
I0801 04:48:13.012224  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (7.894633ms)
I0801 04:48:13.013021  116165 deployment_controller.go:490] "Error syncing deployment" deployment="test-deployment-available-condition/deployment" err="Operation cannot be fulfilled on deployments.apps \"deployment\": the object has been modified; please apply your changes to the latest version and try again"
I0801 04:48:13.013062  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.013056359 +0000 UTC m=+166.554206374)
I0801 04:48:13.013447  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="3.1557ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.013580  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:13.013779  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (3.872209ms)
I0801 04:48:13.013819  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.013931  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (117.159µs)
I0801 04:48:13.014663  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-smdpf/status" latency="10.946937ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40614" resp=200
I0801 04:48:13.015168  116165 replica_set.go:439] Pod deployment-b58dbf467-smdpf updated, objectMeta {Name:deployment-b58dbf467-smdpf GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-smdpf UID:1c74e88d-4fdb-4e8c-b5e0-20fc9cf082d3 ResourceVersion:28142 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc0314e62ba BlockOwnerDeletion:0xc0314e62bb}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-b58dbf467-smdpf GenerateName:deployment-b58dbf467- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-b58dbf467-smdpf UID:1c74e88d-4fdb-4e8c-b5e0-20fc9cf082d3 ResourceVersion:28229 Generation:0 CreationTimestamp:2020-08-01 04:48:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:b58dbf467] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-b58dbf467 UID:108a0cc5-fc0b-42c0-a991-349f5be0053e Controller:0xc031c3df4a BlockOwnerDeletion:0xc031c3df4b}] Finalizers:[] ClusterName: ManagedFields:[]}.
I0801 04:48:13.015347  116165 replica_set.go:449] ReplicaSet "deployment-b58dbf467" will be enqueued after 3600s for availability check
I0801 04:48:13.015416  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.015555  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 9->10, availableReplicas 0->0, sequence No: 1->1
I0801 04:48:13.025433  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="9.420978ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.025566  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.025911  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (12.847011ms)
I0801 04:48:13.025952  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.025947017 +0000 UTC m=+166.567097029)
I0801 04:48:13.031687  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.031731  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="4.734188ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.032095  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (6.142487ms)
I0801 04:48:13.032137  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.032131622 +0000 UTC m=+166.573281638)
I0801 04:48:13.033241  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:13 +0000 UTC - now: 2020-08-01 04:48:13.033232325 +0000 UTC m=+166.574382345]
I0801 04:48:13.033318  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:13.033336  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (1.202132ms)
I0801 04:48:13.054680  116165 request.go:581] Throttling request took 196.64583ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:13.057665  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="2.674089ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:13.063226  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="3.499185ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.064141  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:13.066302  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.066291704 +0000 UTC m=+166.607441720)
I0801 04:48:13.064156  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (48.74046ms)
I0801 04:48:13.066583  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.072541  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (5.980225ms)
I0801 04:48:13.078182  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="3.725891ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.078566  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (12.2673ms)
I0801 04:48:13.078954  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.078985  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.078975327 +0000 UTC m=+166.620125351)
I0801 04:48:13.080003  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:13 +0000 UTC - now: 2020-08-01 04:48:13.079994801 +0000 UTC m=+166.621144826]
I0801 04:48:13.080287  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:13.080968  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (1.987758ms)
I0801 04:48:13.127262  116165 request.go:581] Throttling request took 108.628892ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:13.131179  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="3.530471ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
E0801 04:48:13.187521  116165 event.go:273] Unable to write event: 'Post "http://127.0.0.1:39349/api/v1/namespaces/test-scaled-rollout-deployment/events": dial tcp 127.0.0.1:39349: connect: connection refused' (may retry after sleeping)
I0801 04:48:13.253814  116165 request.go:581] Throttling request took 195.75436ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:13.258268  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="3.99453ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:13.327176  116165 request.go:581] Throttling request took 195.483709ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:13.330149  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="2.628114ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.453864  116165 request.go:581] Throttling request took 195.0817ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:13.459214  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="4.195338ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:13.527143  116165 request.go:581] Throttling request took 196.547447ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:13.530603  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="3.058986ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.653958  116165 request.go:581] Throttling request took 194.301868ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:13.662655  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="8.326666ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:13.727207  116165 request.go:581] Throttling request took 196.126127ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:13.730332  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="2.757353ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.853835  116165 request.go:581] Throttling request took 190.638252ms, request: POST:http://127.0.0.1:35065/api/v1/namespaces/test-deployment-available-condition/events
I0801 04:48:13.861798  116165 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/test-deployment-available-condition/events" latency="7.660949ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=201
I0801 04:48:13.927760  116165 request.go:581] Throttling request took 196.606798ms, request: PUT:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:13.934563  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.934712  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.934685095 +0000 UTC m=+167.475835115)
I0801 04:48:13.934970  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="6.878054ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40568" resp=200
    deployment.go:281: Updating deployment deployment
I0801 04:48:13.945078  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.945261  116165 replica_set_utils.go:59] Updating status for : test-deployment-available-condition/deployment-b58dbf467, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 10->10, availableReplicas 0->9, sequence No: 1->2
I0801 04:48:13.945514  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:13.946149  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467" latency="3.620529ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:13.948279  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467/status" latency="2.550328ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller" srcIP="127.0.0.1:40568" resp=200
I0801 04:48:13.949416  116165 deployment_controller.go:281] ReplicaSet deployment-b58dbf467 updated.
I0801 04:48:13.950269  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (5.197395ms)
I0801 04:48:13.950452  116165 controller_utils.go:186] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-b58dbf467", timestamp:time.Time{wall:0xbfc15ae2f2dde1a0, ext:165394552047, loc:(*time.Location)(0x6df5920)}}
I0801 04:48:13.950897  116165 replica_set.go:649] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-b58dbf467" (451.549µs)
I0801 04:48:13.953044  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-b58dbf467" latency="5.736977ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=409
I0801 04:48:13.953303  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (18.612279ms)
I0801 04:48:13.953365  116165 deployment_controller.go:490] "Error syncing deployment" deployment="test-deployment-available-condition/deployment" err="Operation cannot be fulfilled on replicasets.apps \"deployment-b58dbf467\": the object has been modified; please apply your changes to the latest version and try again"
I0801 04:48:13.953394  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.953389455 +0000 UTC m=+167.494539474)
I0801 04:48:13.957607  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status" latency="3.529445ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:13.957896  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (4.501055ms)
I0801 04:48:13.958137  116165 deployment_controller.go:176] Updating deployment deployment
I0801 04:48:13.958160  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.95815594 +0000 UTC m=+167.499305954)
I0801 04:48:13.963218  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:13 +0000 UTC - now: 2020-08-01 04:48:13.963207564 +0000 UTC m=+167.504357582]
I0801 04:48:13.963295  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:13.963389  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (5.176265ms)
I0801 04:48:13.963459  116165 deployment_controller.go:570] Started syncing deployment "test-deployment-available-condition/deployment" (2020-08-01 04:48:13.963418168 +0000 UTC m=+167.504568173)
I0801 04:48:13.980537  116165 deployment_util.go:808] Deployment "deployment" timed out (false) [last progress check: 2020-08-01 04:48:13 +0000 UTC - now: 2020-08-01 04:48:13.980522123 +0000 UTC m=+167.521672140]
I0801 04:48:13.980615  116165 progress.go:195] Queueing up deployment "deployment" for a progress check after 7199s
I0801 04:48:13.981235  116165 deployment_controller.go:572] Finished syncing deployment "test-deployment-available-condition/deployment" (17.803097ms)
I0801 04:48:14.127226  116165 request.go:581] Throttling request took 185.138725ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:14.137931  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="4.389107ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:14.327187  116165 request.go:581] Throttling request took 188.766598ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:14.329338  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="1.80609ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:14.527188  116165 request.go:581] Throttling request took 197.330786ms, request: GET:http://127.0.0.1:35065/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I0801 04:48:14.537639  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment" latency="10.114995ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
    deployment_test.go:990: unexpected .replicas: expect 10, got 9
I0801 04:48:14.538081  116165 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0801 04:48:14.538431  116165 deployment_controller.go:165] Shutting down deployment controller
I0801 04:48:14.538456  116165 replica_set.go:194] Shutting down replicaset controller
I0801 04:48:14.538524  116165 reflector.go:213] Stopping reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:14.538545  116165 reflector.go:213] Stopping reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:14.538563  116165 reflector.go:213] Stopping reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:134
I0801 04:48:14.538762  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/pods?allowWatchBookmarks=true&resourceVersion=27583&timeout=8m43s&timeoutSeconds=523&watch=true" latency="2.799510925s" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:39918" resp=0
I0801 04:48:14.538820  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=28124&timeout=7m6s&timeoutSeconds=426&watch=true" latency="2.800183675s" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:40604" resp=0
I0801 04:48:14.538762  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=27611&timeout=8m9s&timeoutSeconds=489&watch=true" latency="2.79996852s" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers" srcIP="127.0.0.1:40606" resp=0
I0801 04:48:14.541071  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="2.106536ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:14.549134  116165 httplog.go:89] "HTTP" verb="PUT" URI="/api/v1/namespaces/default/endpoints/kubernetes" latency="7.40117ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
I0801 04:48:14.551305  116165 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency="1.547724ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
W0801 04:48:14.551494  116165 warnings.go:67] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.22+, unavailable in v1.25+
I0801 04:48:14.559852  116165 httplog.go:89] "HTTP" verb="PUT" URI="/apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes" latency="7.908354ms" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:40806" resp=200
W0801 04:48:14.560069  116165 warnings.go:67] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.22+, unavailable in v1.25+
I0801 04:48:14.560461  116165 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0801 04:48:14.560978  116165 reflector.go:213] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0801 04:48:14.561180  116165 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=27583&timeout=9m51s&timeoutSeconds=591&watch=true" latency="6.435170456s" userAgent="deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format" srcIP="127.0.0.1:39908" resp=0
--- FAIL: TestDeploymentAvailableCondition (7.01s)

				from junit_20200801-043839.xml



2946 passed tests and 25 skipped tests are not shown.

Error lines from build-log.txt

... skipping 61 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0801 04:13:59] Call tree:
!!! [0801 04:13:59]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0801 04:13:59]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0801 04:13:59]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0801 04:13:59]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0801 04:13:59]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0801 04:13:59] Running kubeadm tests
+++ [0801 04:14:13] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0801 04:15:37] Running tests without code coverage
{"Time":"2020-08-01T04:18:02.808174219Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t82.357s\n"}
✓  cmd/kubeadm/test/cmd (1m22.361s)
... skipping 323 lines ...
I0801 04:21:35.476876   54481 client.go:360] parsed scheme: "passthrough"
I0801 04:21:35.476952   54481 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0801 04:21:35.476969   54481 clientconn.go:948] ClientConn switching balancer to "pick_first"
+++ [0801 04:21:58] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0801 04:21:59.838672   58014 serving.go:331] Generated self-signed cert in-memory
W0801 04:22:00.654856   58014 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0801 04:22:00.654929   58014 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0801 04:22:00.654937   58014 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0801 04:22:00.654954   58014 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0801 04:22:00.654970   58014 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0801 04:22:00.654992   58014 controllermanager.go:175] Version: v1.20.0-alpha.0.450+54ac3df7d062d3
I0801 04:22:00.656572   58014 secure_serving.go:197] Serving securely on [::]:10257
I0801 04:22:00.656837   58014 tlsconfig.go:240] Starting DynamicServingCertificateController
I0801 04:22:00.666241   58014 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0801 04:22:00.666347   58014 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 90 lines ...
I0801 04:22:01.461864   58014 endpoints_controller.go:181] Starting endpoint controller
W0801 04:22:01.461884   58014 controllermanager.go:541] Skipping "ephemeral-volume"
I0801 04:22:01.461885   58014 shared_informer.go:240] Waiting for caches to sync for endpoint
I0801 04:22:01.462018   58014 gc_controller.go:89] Starting GC controller
I0801 04:22:01.462040   58014 shared_informer.go:240] Waiting for caches to sync for GC
I0801 04:22:01.462255   58014 node_lifecycle_controller.go:77] Sending events to api server
E0801 04:22:01.462914   58014 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
W0801 04:22:01.462930   58014 controllermanager.go:541] Skipping "cloud-node-lifecycle"
I0801 04:22:01.463434   58014 controllermanager.go:549] Started "deployment"
I0801 04:22:01.463470   58014 deployment_controller.go:153] Starting deployment controller
I0801 04:22:01.463485   58014 shared_informer.go:240] Waiting for caches to sync for deployment
I0801 04:22:01.463830   58014 controllermanager.go:549] Started "statefulset"
I0801 04:22:01.463859   58014 stateful_set.go:146] Starting stateful set controller
... skipping 22 lines ...
I0801 04:22:01.468042   58014 node_lifecycle_controller.go:380] Sending events to api server.
I0801 04:22:01.468211   58014 taint_manager.go:163] Sending events to api server.
I0801 04:22:01.468934   58014 node_lifecycle_controller.go:508] Controller will reconcile labels.
I0801 04:22:01.468976   58014 controllermanager.go:549] Started "nodelifecycle"
I0801 04:22:01.469087   58014 node_lifecycle_controller.go:542] Starting node controller
I0801 04:22:01.469101   58014 shared_informer.go:240] Waiting for caches to sync for taint
E0801 04:22:01.469418   58014 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0801 04:22:01.469440   58014 controllermanager.go:541] Skipping "service"
W0801 04:22:01.469739   58014 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0801 04:22:01.469761   58014 controllermanager.go:549] Started "csrcleaner"
I0801 04:22:01.470089   58014 cleaner.go:83] Starting CSR cleaner controller
I0801 04:22:01.470122   58014 controllermanager.go:549] Started "ttl"
I0801 04:22:01.470242   58014 ttl_controller.go:118] Starting TTL controller
... skipping 36 lines ...
I0801 04:22:01.667335   58014 shared_informer.go:247] Caches are synced for PVC protection 
I0801 04:22:01.668031   58014 shared_informer.go:247] Caches are synced for job 
I0801 04:22:01.669335   58014 shared_informer.go:247] Caches are synced for taint 
I0801 04:22:01.669438   58014 taint_manager.go:187] Starting NoExecuteTaintManager
I0801 04:22:01.671020   58014 shared_informer.go:247] Caches are synced for HPA 
I0801 04:22:01.671537   58014 shared_informer.go:247] Caches are synced for expand 
E0801 04:22:01.673556   58014 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0801 04:22:01.675497   58014 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
node/127.0.0.1 created
W0801 04:22:01.689923   58014 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0801 04:22:01.698305   58014 shared_informer.go:247] Caches are synced for endpoint_slice 
I0801 04:22:01.698390   58014 shared_informer.go:247] Caches are synced for ReplicaSet 
I0801 04:22:01.712870   58014 shared_informer.go:247] Caches are synced for ReplicationController 
I0801 04:22:01.753555   58014 shared_informer.go:247] Caches are synced for daemon sets 
+++ [0801 04:22:01] Checking kubectl version
I0801 04:22:01.764205   58014 shared_informer.go:247] Caches are synced for stateful set 
... skipping 133 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0801 04:22:17] Creating namespace namespace-1596255737-14172
namespace/namespace-1596255737-14172 created
Context "test" modified.
+++ [0801 04:22:17] Testing RESTMapper
+++ [0801 04:22:18] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 59 lines ...
namespace/namespace-1596255748-14457 created
Context "test" modified.
+++ [0801 04:22:29] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
I0801 04:22:38.652004   54481 client.go:360] parsed scheme: "passthrough"
I0801 04:22:38.652085   54481 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0801 04:22:38.652098   54481 clientconn.go:948] ClientConn switching balancer to "pick_first"
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 62 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
(Brolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 29 lines ...
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1596255774-17624 namespace.
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1596255774-17624 namespace.
Error: 1 warning received
has:Role is deprecated
Successful
message:Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
No resources found in namespace-1596255774-17624 namespace.
Error: 1 warning received
has:Error: 1 warning received
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:163: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:164: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:165: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 464 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:210: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:215: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:259: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:265: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:269: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:275: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 224 lines ...
core.sh:534: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:554: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0801 04:24:29] "kubectl patch with resourceVersion 666" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:578: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-create kubectl-patch kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0801 04:24:33.557621   58014 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:606: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:631: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
I0801 04:24:36.695150   58014 event.go:291] "Event occurred" object="node-v1-test" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node node-v1-test event: Registered Node node-v1-test in Controller"
... skipping 34 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:683: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:687: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:699: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0801 04:25:14] Creating namespace namespace-1596255914-1778
namespace/namespace-1596255914-1778 created
Context "test" modified.
+++ [0801 04:25:14] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 42 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0801 04:25:15] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 35 lines ...
I0801 04:25:27.385840   58014 event.go:291] "Event occurred" object="namespace-1596255916-1699/test-deployment-retainkeys-8695b756f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8695b756f8-fz7jb"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0801 04:25:31.627189   66358 helpers.go:567] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 11 lines ...
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
I0801 04:25:40.591238   54481 client.go:360] parsed scheme: "endpoint"
I0801 04:25:40.591299   54481 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0801 04:25:40.604847   54481 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj created (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
namespace/nsb created
apply.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0801 04:25:42.389851   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
W0801 04:25:42.389956   54481 ???:1] failed to send watch cancel request{watch-id 11 0  <nil>} {error 25 0  EOF}
pod/a created
apply.sh:161: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
pod/a pruned
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
apply.sh:165: Successful get pods b -n nsb {{.metadata.name}}: b
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
pod "b" deleted
apply.sh:175: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:180: Successful get pods a {{.metadata.name}}: a
Successful
message:Error from server (NotFound): pods "b" not found
has:pods "b" not found
pod/b created
apply.sh:188: Successful get pods a {{.metadata.name}}: a
apply.sh:189: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
service/prune-svc created
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
apply.sh:201: Successful get pods a {{.metadata.name}}: a
... skipping 42 lines ...
pod/b created
apply.sh:242: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod/b unchanged
pod/a pruned
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
apply.sh:249: Successful get pods b -n nsb {{.metadata.name}}: b
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:260: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:264: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIP: Invalid value: "10.0.0.12": field is immutable
... skipping 29 lines ...
I0801 04:26:34.922721   54481 clientconn.go:948] ClientConn switching balancer to "pick_first"
apply.sh:287: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:295: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:303: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:309: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:315: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:321: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 7 lines ...
apply.sh:329: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0801 04:26:48.303586   58014 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
apply.sh:333: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0801 04:26:50.200396   54481 client.go:360] parsed scheme: "endpoint"
I0801 04:26:50.200967   54481 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:339: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0801 04:26:51.363158   54481 controller.go:606] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 9 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_server_side_apply_tests
+++ [0801 04:26:52] Creating namespace namespace-1596256012-8712
namespace/namespace-1596256012-8712 created
Context "test" modified.
I0801 04:26:53.074371   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
W0801 04:26:53.074446   54481 ???:1] failed to send watch cancel request{watch-id 11 0  <nil>} {error 25 0  EOF}
+++ [0801 04:26:53] Testing kubectl apply --server-side
apply.sh:359: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/test-pod serverside-applied
apply.sh:363: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
Successful
message:kubectl
... skipping 13 lines ...
message:1063
has:1063
pod "test-pod" deleted
apply.sh:398: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0801 04:27:01] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 55 lines ...
I0801 04:27:07.315460   54481 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0801 04:27:07.315472   54481 clientconn.go:948] ClientConn switching balancer to "pick_first"
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
I0801 04:27:08.204424   54481 client.go:360] parsed scheme: "endpoint"
I0801 04:27:08.204894   54481 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_run_tests
+++ [0801 04:27:09] Creating namespace namespace-1596256029-30181
namespace/namespace-1596256029-30181 created
I0801 04:27:09.811530   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
W0801 04:27:09.811609   54481 ???:1] failed to send watch cancel request{watch-id 11 0  <nil>} {error 25 0  EOF}
Context "test" modified.
+++ [0801 04:27:09] Testing kubectl run
pod/nginx-extensions created (dry run)
pod/nginx-extensions created (server dry run)
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 2 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0801 04:27:14] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0801 04:27:23.097514   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-9bb9c4878 to 3"
I0801 04:27:23.133024   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-987pk"
I0801 04:27:23.140215   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-fl7mc"
I0801 04:27:23.141093   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-7s9mt"
apps.sh:152: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1596256036-5489\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1596256036-5489"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0801 04:27:33.996133   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6dd6cfdb57 to 3"
I0801 04:27:34.001799   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-9lkxq"
I0801 04:27:34.010944   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-82rg5"
I0801 04:27:34.011638   58014 event.go:291] "Event occurred" object="namespace-1596256036-5489/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-snk7w"
Successful
... skipping 323 lines ...
+++ [0801 04:27:57] Creating namespace namespace-1596256077-1907
namespace/namespace-1596256077-1907 created
Context "test" modified.
+++ [0801 04:27:58] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1596256077-1907 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1596256077-1907 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0801 04:28:04.105806   69838 loader.go:375] Config loaded from file:  /tmp/tmp.tXMqigYdoy/.kube/config
I0801 04:28:04.107884   69838 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0801 04:28:04.184123   69838 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0801 04:28:04.186596   69838 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 627 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-08-01T04:28:14Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2020-08-01T04:28:14Z"}}, "name":"valid-pod", "namespace":"namespace-1596256093-12210", "resourceVersion":"1302", "selfLink":"/api/v1/namespaces/namespace-1596256093-12210/pods/valid-pod", "uid":"900f20d2-5cf9-499d-a9b7-22ad09b0bbf8"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", 
"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-08-01T04:28:14Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2020-08-01T04:28:14Z"}],"name":"valid-pod","namespace":"namespace-1596256093-12210","resourceVersion":"1302","selfLink":"/api/v1/namespaces/namespace-1596256093-12210/pods/valid-pod","uid":"900f20d2-5cf9-499d-a9b7-22ad09b0bbf8"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-08-01T04:28:14Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2020-08-01T04:28:14Z]] name:valid-pod namespace:namespace-1596256093-12210 resourceVersion:1302 selfLink:/api/v1/namespaces/namespace-1596256093-12210/pods/valid-pod uid:900f20d2-5cf9-499d-a9b7-22ad09b0bbf8] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 158 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 39 lines ...
+++ [0801 04:28:27] Creating namespace namespace-1596256107-3537
namespace/namespace-1596256107-3537 created
Context "test" modified.
+++ [0801 04:28:28] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0801 04:28:30] Creating namespace namespace-1596256110-27635
namespace/namespace-1596256110-27635 created
Context "test" modified.
+++ [0801 04:28:31] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0801 04:28:33.359976   58014 event.go:291] "Event occurred" object="namespace-1596256110-27635/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-4brfq"
I0801 04:28:33.365738   58014 event.go:291] "Event occurred" object="namespace-1596256110-27635/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-2v22k"
I0801 04:28:33.365797   58014 event.go:291] "Event occurred" object="namespace-1596256110-27635/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-6cvw4"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2v22k does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-2v22k does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"3aff8da1-22ca-4387-87f4-d8235e779ea3","resourceVersion":"1401","creationTimestamp":"2020-08-01T04:28:37Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"3aff8da1-22ca-4387-87f4-d8235e779ea3","resourceVersion":"1402","creationTimestamp":"2020-08-01T04:28:37Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"3aff8da1-22ca-4387-87f4-d8235e779ea3","resourceVersion":"1402","creationTimestamp":"2020-08-01T04:28:37Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"3aff8da1-22ca-4387-87f4-d8235e779ea3"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 173 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 261 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0801 04:29:13] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 358 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
W0801 04:29:51.364076   54481 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
I0801 04:29:51.364114   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
E0801 04:29:51.366748   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0801 04:29:51] Testing recursive resources
+++ [0801 04:29:51] Creating namespace namespace-1596256191-21766
W0801 04:29:51.665884   54481 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
I0801 04:29:51.666319   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
W0801 04:29:51.666373   54481 ???:1] failed to send watch cancel request{watch-id 11 0  <nil>} {error 25 0  EOF}
E0801 04:29:51.667959   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1596256191-21766 created
W0801 04:29:51.920847   54481 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
E0801 04:29:51.923001   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
W0801 04:29:52.190286   54481 cacher.go:148] Terminating all watchers from cacher *unstructured.Unstructured
I0801 04:29:52.190329   54481 ???:1] sending watch cancel request for closed watcher{watch-id 11 0  <nil>}
E0801 04:29:52.194318   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
E0801 04:29:52.960922   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0801 04:29:53.109942   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0801 04:29:53.194978   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0801 04:29:53.415921   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
(BSuccessful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
I0801 04:29:54.434767   58014 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0801 04:29:54.977377   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0801 04:29:55.375804   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0801 04:29:55.746478   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0801 04:29:56.481702   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1596256191-21766
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 158 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
(BSuccessful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E0801 04:29:59.056976   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0801 04:29:59.966756   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-54785cbcb8 to 3"
I0801 04:29:59.997555   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx-54785cbcb8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-54785cbcb8-xgvlg"
I0801 04:30:00.135980   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx-54785cbcb8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-54785cbcb8-jhjlv"
I0801 04:30:00.136017   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx-54785cbcb8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-54785cbcb8-ll2pr"
E0801 04:30:00.510110   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
E0801 04:30:01.310985   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 32 lines ...
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
E0801 04:30:01.608993   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx" deleted
Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-54785cbcb8-jhjlv:nginx-54785cbcb8-ll2pr:nginx-54785cbcb8-xgvlg:
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0801 04:30:06.859728   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-m9kmv"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0801 04:30:06.867043   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-ptdtj"
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0801 04:30:08.698239   58014 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0801 04:30:08.698308   58014 shared_informer.go:247] Caches are synced for garbage collector 
E0801 04:30:08.814307   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
I0801 04:30:09.154236   58014 shared_informer.go:240] Waiting for caches to sync for resource quota
I0801 04:30:09.154293   58014 shared_informer.go:247] Caches are synced for resource quota 
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
E0801 04:30:09.560745   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
E0801 04:30:10.627642   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0801 04:30:12.606792   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-2r4wx"
I0801 04:30:12.626037   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-qpngh"
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0801 04:30:13.376699   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0801 04:30:15.030742   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx1-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-758b5949b6 to 2"
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0801 04:30:15.035443   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-5j78s"
I0801 04:30:15.041558   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-kszxx"
I0801 04:30:15.041997   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx0-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-75db9cdfd9 to 2"
I0801 04:30:15.046506   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-5t5bx"
I0801 04:30:15.051624   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-m5t45"
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0801 04:30:20.281658   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-k77vn"
I0801 04:30:20.294115   58014 event.go:291] "Event occurred" object="namespace-1596256191-21766/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-g9t8x"
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
I0801 04:30:21.389011   54481 client.go:360] parsed scheme: "passthrough"
I0801 04:30:21.389096   54481 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0801 04:30:21.389112   54481 clientconn.go:948] ClientConn switching balancer to "pick_first"
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
E0801 04:30:22.894132   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0801 04:30:23] Testing kubectl(v1:namespaces)
I0801 04:30:23.313110   58014 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1596256191-21766
I0801 04:30:23.317224   58014 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1596256191-21766
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1459: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
namespace "my-namespace" deleted
E0801 04:30:27.727984   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1468: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 31 lines ...
namespace "namespace-1596256123-20940" deleted
namespace "namespace-1596256123-9829" deleted
namespace "namespace-1596256129-14097" deleted
namespace "namespace-1596256134-17149" deleted
namespace "namespace-1596256140-29001" deleted
namespace "namespace-1596256191-21766" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1596255726-30375" deleted
... skipping 29 lines ...
namespace "namespace-1596256123-20940" deleted
namespace "namespace-1596256123-9829" deleted
namespace "namespace-1596256129-14097" deleted
namespace "namespace-1596256134-17149" deleted
namespace "namespace-1596256140-29001" deleted
namespace "namespace-1596256191-21766" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1475: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1476: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
E0801 04:30:32.798338   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
resourcequota/test-quota created (server dry run)
core.sh:1480: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1483: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
I0801 04:30:34.699826   58014 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
E0801 04:30:35.755859   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0801 04:30:40.434783   58014 namespace_controller.go:185] Namespace has been deleted my-namespace
core.sh:1495: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1499: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1503: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0801 04:30:41.711839   58014 namespace_controller.go:185] Namespace has been deleted kube-node-lease
... skipping 34 lines ...
I0801 04:30:42.597342   58014 namespace_controller.go:185] Namespace has been deleted namespace-1596256123-9829
I0801 04:30:42.622358   58014 namespace_controller.go:185] Namespace has been deleted namespace-1596256140-29001
core.sh:1507: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0801 04:30:42.745619   58014 namespace_controller.go:185] Namespace has been deleted namespace-1596256191-21766
core.sh:1509: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1516: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1520: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 48 lines ...
has not:example.com
core.sh:823: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
core.sh:827: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:831: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
E0801 04:30:52.766133   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:835: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:836: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
secret "test-secret" deleted
core.sh:846: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
I0801 04:30:54.582774   58014 namespace_controller.go:185] Namespace has been deleted other
... skipping 22 lines ...
core.sh:910: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:911: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:920: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
E0801 04:31:05.887661   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 7 lines ...
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
(Bnamespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
(Bcore.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
(Bcore.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
(BE0801 04:31:13.111712   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-configmap created (dry run)
I0801 04:31:13.655286   58014 namespace_controller.go:185] Namespace has been deleted test-secrets
configmap/test-configmap created (server dry run)
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
(Bconfigmap/test-configmap created
configmap/test-binary-configmap created
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
(Bcore.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
(BE0801 04:31:15.973352   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests
... skipping 3 lines ...
+++ command: run_client_config_tests
+++ [0801 04:31:23] Creating namespace namespace-1596256283-7732
namespace/namespace-1596256283-7732 created
Context "test" modified.
+++ [0801 04:31:23] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 6 lines ...
I0801 04:31:27.526764   58014 namespace_controller.go:185] Namespace has been deleted test-configmaps
core.sh:941: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
core.sh:945: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
core.sh:949: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name \"test-service-account\" }}found{{end}}{{end}}:: :
serviceaccount/test-service-account created (dry run)
E0801 04:31:29.029859   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
serviceaccount/test-service-account created (server dry run)
core.sh:953: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name \"test-service-account\" }}found{{end}}{{end}}:: :
serviceaccount/test-service-account created
core.sh:957: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
... skipping 28 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 40 lines ...
Labels:         controller-uid=f299157a-f6d4-40e7-a049-f0b82642aa9a
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Start Time:     Sat, 01 Aug 2020 04:31:43 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=f299157a-f6d4-40e7-a049-f0b82642aa9a
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 33 lines ...
job.batch "test-job" deleted
I0801 04:31:52.839003   58014 event.go:291] "Event occurred" object="namespace-1596256311-23336/test-job-pi" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-job-pi-47mnz"
job.batch/test-job-pi created
create.sh:112: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
job.batch "test-job-pi" deleted
cronjob.batch/test-pi created
E0801 04:31:53.975679   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0801 04:31:54.078874   58014 event.go:291] "Event occurred" object="namespace-1596256311-23336/my-pi" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-pi-zqwgf"
job.batch/my-pi created
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
... skipping 4 lines ...

+++ Running case: test-cmd.run_pod_templates_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
+++ [0801 04:31:55] Creating namespace namespace-1596256315-22485
I0801 04:31:55.623052   58014 namespace_controller.go:185] Namespace has been deleted test-jobs
E0801 04:31:55.663750   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1596256315-22485 created
Context "test" modified.
+++ [0801 04:31:56] Testing pod templates
core.sh:1581: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0801 04:31:57.296981   54481 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
... skipping 81 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0801 04:32:03.752843   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Labels:
matched Selector:
matched IP:
matched Port:
matched Endpoints:
... skipping 248 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
E0801 04:32:07.550620   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-08-01T04:32:01Z"
  labels:
    app: redis
... skipping 49 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1020: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1033: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:1040: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1044: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
... skipping 70 lines ...
+++ [0801 04:32:29] Testing kubectl(v1:daemonsets)
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0801 04:32:30.242247   54481 controller.go:606] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I0801 04:32:30.258799   54481 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
E0801 04:32:31.201479   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
... skipping 20 lines ...
daemonset.apps/bind created
apps.sh:73: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1596256355-3656"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
apps.sh:76: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:77: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0801 04:32:38.522988   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind configured
apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:81: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:82: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
apps.sh:83: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1596256355-3656"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1596256355-3656"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
... skipping 6 lines ...
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
 (dry run)
E0801 04:32:41.118036   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind rolled back (server dry run)
apps.sh:87: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0801 04:32:43.274055   58014 daemon_controller.go:320] namespace-1596256355-3656/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1596256355-3656", SelfLink:"/apis/apps/v1/namespaces/namespace-1596256355-3656/daemonsets/bind", UID:"0f235e9e-4642-4a67-b3f0-c8813b3945e1", ResourceVersion:"2410", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731853156, loc:(*time.Location)(0x6a38ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1596256355-3656\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0019212e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001921300)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001921340), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019213e0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001921420), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001921460)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001921520), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a07d78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0005cc460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001921540), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001ed0258)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a07dcc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:92: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:93: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:98: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0801 04:32:45.342604   58014 daemon_controller.go:320] namespace-1596256355-3656/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1596256355-3656", SelfLink:"/apis/apps/v1/namespaces/namespace-1596256355-3656/daemonsets/bind", UID:"0f235e9e-4642-4a67-b3f0-c8813b3945e1", ResourceVersion:"2415", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731853156, loc:(*time.Location)(0x6a38ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1596256355-3656\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cecc60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cecca0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cecce0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cecd20)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001cecd60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001cecda0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001cecde0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003098668), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002451f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001cece20), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000b0ae00)}, 
MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0030986bc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:101: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:102: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:103: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1596256367-21439
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
core.sh:1224: Successful get rc frontend {{.spec.replicas}}: 3
E0801 04:32:54.596907   58014 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1596256367-21439 /api/v1/namespaces/namespace-1596256367-21439/replicationcontrollers/frontend ca76b214-1055-4fb3-8fd2-6fec95281d2b 2459 2 2020-08-01 04:32:50 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-08-01 04:32:50 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl-create Update v1 2020-08-01 04:32:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002fc1fc8 
<nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
replicationcontroller/frontend scaled
I0801 04:32:54.610013   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-dbkrv"
core.sh:1228: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1232: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1236: Successful get rc frontend {{.spec.replicas}}: 2
E0801 04:32:56.032125   58014 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0801 04:32:56.399777   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-bzkth"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 3
E0801 04:32:57.414257   58014 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1596256367-21439 /api/v1/namespaces/namespace-1596256367-21439/replicationcontrollers/frontend ca76b214-1055-4fb3-8fd2-6fec95281d2b 2473 4 2020-08-01 04:32:50 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kubectl-create Update v1 2020-08-01 04:32:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update v1 2020-08-01 04:32:56 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cf30a8 
<nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
... skipping 30 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0801 04:33:03.983155   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-76b5cd66f5 to 3"
I0801 04:33:03.995365   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-76b5cd66f5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-76b5cd66f5-r8jv6"
I0801 04:33:03.999461   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-76b5cd66f5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-76b5cd66f5-r8n4t"
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1391: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1395: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1404: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0801 04:33:22.738150   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-748ddcb48b to 3"
I0801 04:33:22.748232   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-tq99x"
I0801 04:33:22.751839   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-ldnpc"
I0801 04:33:22.755798   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-625nc"
core.sh:1410: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1411: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0801 04:33:24.299811   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-7bfb7d56b6 to 1"
I0801 04:33:24.306661   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources-7bfb7d56b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-7bfb7d56b6-hb4d9"
core.sh:1415: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1416: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0801 04:33:25.623668   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-748ddcb48b to 2"
I0801 04:33:25.637221   58014 event.go:291] "Event occurred" object="namespace-1596256367-21439/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1