PR:       zouyee: using string instead of byte
Result:   FAILURE
Tests:    1 failed / 2610 succeeded
Started:  2020-01-14 22:39
Elapsed:  27m27s
Revision: bae72ee8d2e605149a736987071c63d3e0ee99c9
Refs:     86569

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestScorePlugin 4.18s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestScorePlugin$
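
The command above reruns only the failing test locally. For orientation, TestScorePlugin exercises the scheduler framework's Score extension point; below is a minimal sketch of what such a plugin looks like, assuming the framework/v1alpha1 API in the tree at this revision. The plugin name and the fixed score it returns are hypothetical, not taken from the test.

    package scoreplugin

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
    )

    // fixedScorePlugin is a hypothetical Score plugin that gives every
    // candidate node the same score.
    type fixedScorePlugin struct{}

    // Name returns the (hypothetical) name the plugin registers under.
    func (p *fixedScorePlugin) Name() string { return "fixed-score-plugin" }

    // Score is invoked once per candidate node during the scoring phase.
    func (p *fixedScorePlugin) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
        return 1, nil // every node scores equally
    }

    // ScoreExtensions returns nil, meaning no NormalizeScore step runs.
    func (p *fixedScorePlugin) ScoreExtensions() framework.ScoreExtensions {
        return nil
    }

The log that follows is the integration test's in-process apiserver bootstrapping its storage backends before the scheduler test body runs.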
=== RUN   TestScorePlugin
W0114 23:01:20.816810  109940 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0114 23:01:20.816835  109940 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I0114 23:01:20.816848  109940 master.go:308] Node port range unspecified. Defaulting to 30000-32767.
I0114 23:01:20.816859  109940 master.go:264] Using reconciler: 
I0114 23:01:20.818673  109940 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.819009  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.819121  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.820847  109940 store.go:1350] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0114 23:01:20.821022  109940 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.820930  109940 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0114 23:01:20.821605  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.821640  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.822568  109940 watch_cache.go:409] Replace watchCache (rev: 26359) 
I0114 23:01:20.822865  109940 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 23:01:20.822945  109940 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 23:01:20.822932  109940 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.823053  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.823078  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.824252  109940 store.go:1350] Monitoring limitranges count at <storage-prefix>//limitranges
I0114 23:01:20.824285  109940 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0114 23:01:20.824290  109940 watch_cache.go:409] Replace watchCache (rev: 26359) 
I0114 23:01:20.824439  109940 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.824569  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.824596  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.825555  109940 store.go:1350] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0114 23:01:20.825635  109940 watch_cache.go:409] Replace watchCache (rev: 26359) 
I0114 23:01:20.825668  109940 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0114 23:01:20.825760  109940 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.825902  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.825934  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.828256  109940 store.go:1350] Monitoring secrets count at <storage-prefix>//secrets
I0114 23:01:20.828373  109940 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0114 23:01:20.828426  109940 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.828534  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.828556  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.828649  109940 watch_cache.go:409] Replace watchCache (rev: 26360) 
I0114 23:01:20.829244  109940 store.go:1350] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0114 23:01:20.829410  109940 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.829564  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.829603  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.829686  109940 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0114 23:01:20.830063  109940 watch_cache.go:409] Replace watchCache (rev: 26360) 
I0114 23:01:20.830792  109940 store.go:1350] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0114 23:01:20.830852  109940 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0114 23:01:20.830956  109940 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.831071  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.831097  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.832758  109940 store.go:1350] Monitoring configmaps count at <storage-prefix>//configmaps
I0114 23:01:20.833153  109940 watch_cache.go:409] Replace watchCache (rev: 26360) 
I0114 23:01:20.833061  109940 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0114 23:01:20.833558  109940 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.833820  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.833882  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.833820  109940 watch_cache.go:409] Replace watchCache (rev: 26360) 
I0114 23:01:20.836115  109940 watch_cache.go:409] Replace watchCache (rev: 26360) 
I0114 23:01:20.836784  109940 store.go:1350] Monitoring namespaces count at <storage-prefix>//namespaces
I0114 23:01:20.837000  109940 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0114 23:01:20.836981  109940 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.837400  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.837503  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.838990  109940 watch_cache.go:409] Replace watchCache (rev: 26361) 
I0114 23:01:20.839499  109940 store.go:1350] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0114 23:01:20.839592  109940 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0114 23:01:20.839655  109940 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.839850  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.839894  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.841274  109940 store.go:1350] Monitoring nodes count at <storage-prefix>//minions
I0114 23:01:20.841462  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.841492  109940 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I0114 23:01:20.841617  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.841639  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.842139  109940 watch_cache.go:409] Replace watchCache (rev: 26361) 
I0114 23:01:20.842528  109940 store.go:1350] Monitoring pods count at <storage-prefix>//pods
I0114 23:01:20.842623  109940 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I0114 23:01:20.842755  109940 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.842891  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.842912  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.843420  109940 store.go:1350] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0114 23:01:20.843625  109940 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0114 23:01:20.843618  109940 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.843747  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.843766  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.844021  109940 watch_cache.go:409] Replace watchCache (rev: 26362) 
I0114 23:01:20.844112  109940 watch_cache.go:409] Replace watchCache (rev: 26362) 
I0114 23:01:20.845626  109940 watch_cache.go:409] Replace watchCache (rev: 26362) 
I0114 23:01:20.846684  109940 store.go:1350] Monitoring services count at <storage-prefix>//services/specs
I0114 23:01:20.846761  109940 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.846905  109940 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0114 23:01:20.846950  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.847166  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.847978  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.848005  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.848591  109940 watch_cache.go:409] Replace watchCache (rev: 26362) 
I0114 23:01:20.849174  109940 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.849316  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.849339  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.850134  109940 store.go:1350] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0114 23:01:20.850165  109940 rest.go:113] the default service ipfamily for this cluster is: IPv4
I0114 23:01:20.850693  109940 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.850846  109940 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0114 23:01:20.851257  109940 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.851688  109940 watch_cache.go:409] Replace watchCache (rev: 26363) 
I0114 23:01:20.852818  109940 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.853781  109940 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.854808  109940 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.855673  109940 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.856270  109940 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.856563  109940 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.856908  109940 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.857713  109940 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.858427  109940 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.858846  109940 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.859696  109940 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.862039  109940 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.862925  109940 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.863303  109940 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.864111  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.864435  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.864719  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.865136  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.865524  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.865784  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.866120  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.867155  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.867500  109940 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.868679  109940 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.869978  109940 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.870565  109940 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.870988  109940 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.871822  109940 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.872240  109940 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.873118  109940 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.874000  109940 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.874777  109940 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.875779  109940 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.876402  109940 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.876659  109940 master.go:488] Skipping disabled API group "auditregistration.k8s.io".
I0114 23:01:20.876794  109940 master.go:499] Enabling API group "authentication.k8s.io".
I0114 23:01:20.876898  109940 master.go:499] Enabling API group "authorization.k8s.io".
I0114 23:01:20.877153  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.877555  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.877668  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.878817  109940 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 23:01:20.878993  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.879134  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.879155  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.879269  109940 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 23:01:20.881030  109940 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 23:01:20.881199  109940 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 23:01:20.881712  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.881968  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.882079  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.882728  109940 watch_cache.go:409] Replace watchCache (rev: 26370) 
I0114 23:01:20.883244  109940 watch_cache.go:409] Replace watchCache (rev: 26370) 
I0114 23:01:20.883643  109940 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0114 23:01:20.883671  109940 master.go:499] Enabling API group "autoscaling".
I0114 23:01:20.883697  109940 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0114 23:01:20.883832  109940 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.884039  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.884073  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.884867  109940 store.go:1350] Monitoring jobs.batch count at <storage-prefix>//jobs
I0114 23:01:20.885019  109940 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0114 23:01:20.885060  109940 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.885788  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.885813  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.886810  109940 store.go:1350] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0114 23:01:20.886961  109940 master.go:499] Enabling API group "batch".
I0114 23:01:20.887243  109940 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.887494  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.887648  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.886839  109940 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0114 23:01:20.887824  109940 watch_cache.go:409] Replace watchCache (rev: 26371) 
I0114 23:01:20.888769  109940 store.go:1350] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0114 23:01:20.888799  109940 master.go:499] Enabling API group "certificates.k8s.io".
I0114 23:01:20.888907  109940 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0114 23:01:20.888963  109940 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.889309  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.889406  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.890222  109940 watch_cache.go:409] Replace watchCache (rev: 26371) 
I0114 23:01:20.890972  109940 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 23:01:20.891119  109940 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 23:01:20.891134  109940 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.891477  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.891574  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.891638  109940 watch_cache.go:409] Replace watchCache (rev: 26371) 
I0114 23:01:20.891960  109940 watch_cache.go:409] Replace watchCache (rev: 26371) 
I0114 23:01:20.893055  109940 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0114 23:01:20.893168  109940 master.go:499] Enabling API group "coordination.k8s.io".
I0114 23:01:20.893169  109940 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0114 23:01:20.893641  109940 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.893901  109940 watch_cache.go:409] Replace watchCache (rev: 26372) 
I0114 23:01:20.894278  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.894392  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.894664  109940 watch_cache.go:409] Replace watchCache (rev: 26372) 
I0114 23:01:20.895961  109940 store.go:1350] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I0114 23:01:20.895991  109940 master.go:499] Enabling API group "discovery.k8s.io".
I0114 23:01:20.896123  109940 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I0114 23:01:20.896491  109940 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.897592  109940 watch_cache.go:409] Replace watchCache (rev: 26373) 
I0114 23:01:20.897648  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.897670  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.899154  109940 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 23:01:20.899181  109940 master.go:499] Enabling API group "extensions".
I0114 23:01:20.899293  109940 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 23:01:20.899342  109940 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.899721  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.899815  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.900542  109940 watch_cache.go:409] Replace watchCache (rev: 26373) 
I0114 23:01:20.901566  109940 store.go:1350] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0114 23:01:20.901667  109940 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0114 23:01:20.901757  109940 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.901928  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.901951  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.902725  109940 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0114 23:01:20.902785  109940 master.go:499] Enabling API group "networking.k8s.io".
I0114 23:01:20.902802  109940 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0114 23:01:20.903017  109940 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.903116  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.903133  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.904115  109940 watch_cache.go:409] Replace watchCache (rev: 26373) 
I0114 23:01:20.904369  109940 store.go:1350] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0114 23:01:20.904396  109940 master.go:499] Enabling API group "node.k8s.io".
I0114 23:01:20.904571  109940 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.904694  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.904714  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.904792  109940 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0114 23:01:20.905625  109940 store.go:1350] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0114 23:01:20.905683  109940 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0114 23:01:20.905913  109940 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.906118  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.906183  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.908326  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.908571  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.909100  109940 store.go:1350] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0114 23:01:20.909122  109940 master.go:499] Enabling API group "policy".
I0114 23:01:20.909177  109940 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.909296  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.909342  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.909461  109940 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0114 23:01:20.910550  109940 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 23:01:20.910768  109940 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.911097  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.911120  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.911283  109940 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 23:01:20.912766  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.913404  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.914305  109940 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 23:01:20.914375  109940 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.914505  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.914530  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.914632  109940 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 23:01:20.915823  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.916369  109940 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 23:01:20.916466  109940 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 23:01:20.916565  109940 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.916712  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.916737  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.917716  109940 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 23:01:20.917781  109940 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.917898  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.917898  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.917917  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.918037  109940 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 23:01:20.919794  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.919840  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.920309  109940 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0114 23:01:20.920332  109940 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0114 23:01:20.922035  109940 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.922222  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.922246  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.923095  109940 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0114 23:01:20.923196  109940 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0114 23:01:20.923154  109940 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.923438  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.923455  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.924454  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.924561  109940 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0114 23:01:20.924724  109940 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0114 23:01:20.924804  109940 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.924932  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.924950  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.926139  109940 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0114 23:01:20.926201  109940 master.go:499] Enabling API group "rbac.authorization.k8s.io".
I0114 23:01:20.926209  109940 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0114 23:01:20.927555  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.927621  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.928223  109940 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.928935  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.928967  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.929016  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.929681  109940 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 23:01:20.929821  109940 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 23:01:20.929887  109940 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.930039  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.930070  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.930657  109940 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0114 23:01:20.930683  109940 master.go:499] Enabling API group "scheduling.k8s.io".
I0114 23:01:20.930754  109940 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0114 23:01:20.930799  109940 master.go:488] Skipping disabled API group "settings.k8s.io".
I0114 23:01:20.930974  109940 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.931095  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.931110  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.931120  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.931660  109940 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 23:01:20.931733  109940 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 23:01:20.932864  109940 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.933043  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.933072  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.933567  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.933749  109940 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 23:01:20.933849  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.933976  109940 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 23:01:20.933971  109940 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.934082  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.934097  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.934873  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.935075  109940 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 23:01:20.935156  109940 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 23:01:20.935263  109940 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.935382  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.935663  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.936214  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.940931  109940 store.go:1350] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0114 23:01:20.941305  109940 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.941449  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.941476  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.941587  109940 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0114 23:01:20.942805  109940 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0114 23:01:20.943509  109940 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0114 23:01:20.943979  109940 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.944131  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.944154  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.945318  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.945513  109940 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0114 23:01:20.945736  109940 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0114 23:01:20.946852  109940 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.947778  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.947802  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.947560  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.948430  109940 watch_cache.go:409] Replace watchCache (rev: 26374) 
I0114 23:01:20.949934  109940 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0114 23:01:20.949971  109940 master.go:499] Enabling API group "storage.k8s.io".
I0114 23:01:20.949989  109940 master.go:488] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I0114 23:01:20.950047  109940 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0114 23:01:20.951451  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.952453  109940 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.952596  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.952615  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.957805  109940 store.go:1350] Monitoring deployments.apps count at <storage-prefix>//deployments
I0114 23:01:20.957882  109940 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0114 23:01:20.958064  109940 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.958756  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.958797  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.959184  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.960009  109940 store.go:1350] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0114 23:01:20.960047  109940 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0114 23:01:20.960261  109940 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.960433  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.960454  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.961127  109940 store.go:1350] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0114 23:01:20.961432  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.961469  109940 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0114 23:01:20.961564  109940 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.961712  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.961746  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.962490  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.962692  109940 store.go:1350] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0114 23:01:20.962816  109940 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0114 23:01:20.962966  109940 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.963102  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.963126  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:20.989707  109940 watch_cache.go:409] Replace watchCache (rev: 26375) 
I0114 23:01:20.990180  109940 store.go:1350] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0114 23:01:20.991276  109940 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0114 23:01:20.992826  109940 master.go:499] Enabling API group "apps".
I0114 23:01:20.993082  109940 watch_cache.go:409] Replace watchCache (rev: 26376) 
I0114 23:01:20.993087  109940 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:20.994795  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:20.994850  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.002265  109940 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 23:01:21.003368  109940 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.003692  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:21.003558  109940 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 23:01:21.003722  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.005442  109940 watch_cache.go:409] Replace watchCache (rev: 26376) 
I0114 23:01:21.005992  109940 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 23:01:21.006432  109940 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 23:01:21.006654  109940 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.006818  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:21.006842  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.011394  109940 watch_cache.go:409] Replace watchCache (rev: 26376) 
I0114 23:01:21.015490  109940 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0114 23:01:21.015690  109940 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0114 23:01:21.017184  109940 watch_cache.go:409] Replace watchCache (rev: 26376) 
I0114 23:01:21.017895  109940 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.018318  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:21.018345  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.020735  109940 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0114 23:01:21.020790  109940 master.go:499] Enabling API group "admissionregistration.k8s.io".
I0114 23:01:21.020860  109940 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.021286  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:21.021490  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.021680  109940 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0114 23:01:21.025448  109940 watch_cache.go:409] Replace watchCache (rev: 26377) 
I0114 23:01:21.025920  109940 store.go:1350] Monitoring events count at <storage-prefix>//events
I0114 23:01:21.026194  109940 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I0114 23:01:21.026050  109940 master.go:499] Enabling API group "events.k8s.io".
I0114 23:01:21.027562  109940 watch_cache.go:409] Replace watchCache (rev: 26377) 
I0114 23:01:21.028760  109940 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.029311  109940 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.030785  109940 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.030936  109940 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.031066  109940 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.031189  109940 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.031406  109940 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.032297  109940 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.033019  109940 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.033890  109940 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.037331  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.037897  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.039113  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.039626  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.041476  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.042108  109940 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.044670  109940 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.045071  109940 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.046615  109940 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.047150  109940 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.047277  109940 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0114 23:01:21.048975  109940 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.049236  109940 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.049669  109940 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.050721  109940 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.051793  109940 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.053620  109940 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.053778  109940 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I0114 23:01:21.054899  109940 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.055367  109940 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.057302  109940 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.058570  109940 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.059006  109940 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.060112  109940 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.060255  109940 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0114 23:01:21.061437  109940 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.063919  109940 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.066224  109940 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.067324  109940 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.068632  109940 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.071659  109940 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.074657  109940 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.075577  109940 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.077598  109940 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.082018  109940 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.084478  109940 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.084631  109940 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0114 23:01:21.086273  109940 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.088271  109940 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.088422  109940 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0114 23:01:21.091504  109940 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.093148  109940 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.095569  109940 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.095954  109940 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.098118  109940 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.098833  109940 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.102691  109940 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.103349  109940 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.103431  109940 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
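The storagebackend.Config dumps above print their durations as raw nanosecond counts. A minimal Go sketch of what those integers translate to — the two values are copied from the log lines; nothing else here comes from the real config code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// time.Duration is an int64 nanosecond count, so the raw values from
	// the storagebackend.Config dumps convert directly.
	compactionInterval := time.Duration(300000000000) // CompactionInterval from the dump
	countMetricPoll := time.Duration(60000000000)     // CountMetricPollPeriod from the dump
	fmt.Println(compactionInterval) // 5m0s: etcd compaction every five minutes
	fmt.Println(countMetricPoll)    // 1m0s: object-count metric polled every minute
}
```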
I0114 23:01:21.108777  109940 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.109836  109940 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.111825  109940 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.114700  109940 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.116253  109940 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.116654  109940 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.119152  109940 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.119649  109940 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.120119  109940 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.125451  109940 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.125943  109940 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.126689  109940 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.126864  109940 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0114 23:01:21.127423  109940 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0114 23:01:21.130426  109940 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.137896  109940 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.138883  109940 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.140321  109940 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0114 23:01:21.141638  109940 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"843745d5-c208-4242-9985-cd34bad3d93c", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0114 23:01:21.150396  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 23:01:21.150495  109940 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0114 23:01:21.150519  109940 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0114 23:01:21.150778  109940 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 23:01:21.150801  109940 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I0114 23:01:21.151691  109940 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (460.098µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60884]
I0114 23:01:21.151987  109940 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.468305ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60882]
I0114 23:01:21.152369  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.152393  109940 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0114 23:01:21.152403  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.152413  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.152428  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.152452  109940 httplog.go:90] GET /healthz: (166.603µs) 0 [Go-http-client/1.1 127.0.0.1:60882]
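The multi-line block above is the verbose healthz report: one `[+]`/`[-]` line per registered check, with the failure reasons withheld from the caller. A minimal sketch of requesting the same report over HTTP; the address is an assumption, since the test binds an ephemeral port that is not shown in the log:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The ?verbose query asks the healthz handler for the per-check listing.
	resp, err := http.Get("http://127.0.0.1:8080/healthz?verbose") // hypothetical address
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// While any check fails, the endpoint returns a non-200 status and a
	// "[-]<check> failed: reason withheld" line per failing check.
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body))
}
```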
I0114 23:01:21.153747  109940 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=26360 labels= fields= timeout=5m53s
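The "Listing and watching *v1.ConfigMap" line earlier, the LIST with resourceVersion=0, and the "Starting watch ... rv=26360" line just above are the reflector's list-then-watch handshake: one LIST to populate the store, then a WATCH from the revision the LIST returned. A minimal client-go sketch of the same pattern, assuming a hypothetical kubeconfig path; the 12h resync mirrors the "(12h0m0s)" in the reflector log line:

```go
package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch ConfigMaps in kube-system, as the trust controller above does.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "configmaps", "kube-system", fields.Everything())
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &v1.ConfigMap{}, store, 12*time.Hour)

	stop := make(chan struct{})
	defer close(stop)
	r.Run(stop) // blocks: LIST once, then WATCH, re-listing when the watch expires
}
```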
I0114 23:01:21.154739  109940 httplog.go:90] GET /api/v1/services: (1.349316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.173015  109940 httplog.go:90] GET /api/v1/services: (2.033529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
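The httplog.go:90 lines follow a fixed shape: verb, path, latency in parentheses, status code, then user agent and remote address in brackets. A minimal sketch that extracts those fields; the pattern is inferred from the lines in this log, not from any documented format guarantee:

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches "] GET /path: (1.349316ms) 200 [" inside an httplog.go line.
var httplogRe = regexp.MustCompile(`] (\w+) (\S+): \(([^)]+)\) (\d+) \[`)

func main() {
	line := `I0114 23:01:21.154739  109940 httplog.go:90] GET /api/v1/services: (1.349316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]`
	if m := httplogRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("verb=%s path=%s latency=%s status=%s\n", m[1], m[2], m[3], m[4])
	}
}
```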
I0114 23:01:21.181032  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.181070  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.181081  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.181100  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.181149  109940 httplog.go:90] GET /healthz: (240.152µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:21.185535  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (4.611911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.189758  109940 httplog.go:90] POST /api/v1/namespaces: (3.736319ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.189999  109940 httplog.go:90] GET /api/v1/services: (5.061569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:21.191367  109940 httplog.go:90] GET /api/v1/namespaces/kube-public: (860.896µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.193554  109940 httplog.go:90] POST /api/v1/namespaces: (1.81782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.196310  109940 httplog.go:90] GET /api/v1/services: (10.581031ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.197466  109940 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.102979ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60886]
I0114 23:01:21.199639  109940 httplog.go:90] POST /api/v1/namespaces: (1.753634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.250686  109940 shared_informer.go:236] caches populated
I0114 23:01:21.251070  109940 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
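The "Waiting for caches to sync" / "Caches are synced" pair above brackets the standard shared-informer startup barrier: the controller does no work until its informers' initial LIST has landed in the local store. A minimal client-go sketch of the same barrier, assuming a hypothetical kubeconfig path:

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// 12h resync mirrors the "(12h0m0s)" in the reflector line above.
	factory := informers.NewSharedInformerFactory(clientset, 12*time.Hour)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Block until the initial LIST has populated the local store; this is the
	// barrier behind "Waiting for caches to sync" / "Caches are synced".
	if !cache.WaitForCacheSync(stop, cmInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	// ... a controller's work loop would start here ...
}
```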
I0114 23:01:21.253198  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.253228  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.253241  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.253259  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.253315  109940 httplog.go:90] GET /healthz: (241.623µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.282215  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.282272  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.282284  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.282293  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.282321  109940 httplog.go:90] GET /healthz: (236.701µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.353207  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.353242  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.353256  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.353265  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.353297  109940 httplog.go:90] GET /healthz: (281.65µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.386328  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.386362  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.386374  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.386382  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.386437  109940 httplog.go:90] GET /healthz: (257.277µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.453278  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.453309  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.453322  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.453330  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.453358  109940 httplog.go:90] GET /healthz: (226.949µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.481939  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.481972  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.481984  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.481992  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.482028  109940 httplog.go:90] GET /healthz: (221.484µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.554549  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.554583  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.554594  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.554604  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.554633  109940 httplog.go:90] GET /healthz: (219.552µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.581929  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.581961  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.581973  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.581981  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.582007  109940 httplog.go:90] GET /healthz: (207.389µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.653209  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.653251  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.653262  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.653271  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.653311  109940 httplog.go:90] GET /healthz: (264.94µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.681897  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.681938  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.681950  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.681959  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.681987  109940 httplog.go:90] GET /healthz: (221.955µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.753228  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.753268  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.753279  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.753299  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.753337  109940 httplog.go:90] GET /healthz: (296.299µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.782389  109940 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0114 23:01:21.782424  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.782437  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.782446  109940 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.782475  109940 httplog.go:90] GET /healthz: (231.042µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
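Each repeated block above is one probe of /healthz returning non-200 while etcd and the post-start hooks are still settling; just below, once the etcd client connection is established at 23:01:21.818, the etcd check flips to ok and only the bootstrap hooks remain failing. A hedged sketch of such a readiness poll loop using apimachinery's wait helpers; the URL and intervals are assumptions:

```go
package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Retry /healthz until it returns 200 or the timeout elapses.
	err := wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		resp, err := http.Get("http://127.0.0.1:8080/healthz") // hypothetical address
		if err != nil {
			return false, nil // server not up yet; keep polling
		}
		resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```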
I0114 23:01:21.818429  109940 client.go:361] parsed scheme: "endpoint"
I0114 23:01:21.818518  109940 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 23:01:21.854234  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.854264  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.854274  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.854318  109940 httplog.go:90] GET /healthz: (1.28244ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.882948  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.882985  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.882995  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.883046  109940 httplog.go:90] GET /healthz: (1.262592ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:21.954189  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.954221  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.954231  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.954298  109940 httplog.go:90] GET /healthz: (1.237522ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:21.982919  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:21.982958  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:21.982969  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:21.983012  109940 httplog.go:90] GET /healthz: (1.208451ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.054745  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.054778  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:22.054789  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.054827  109940 httplog.go:90] GET /healthz: (1.805024ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:22.083783  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.083817  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:22.083844  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.083911  109940 httplog.go:90] GET /healthz: (2.040906ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.152416  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.507068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.152772  109940 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (1.868542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.155184  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.155212  109940 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0114 23:01:22.155222  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.155265  109940 httplog.go:90] GET /healthz: (2.338912ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:22.155601  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.464377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.156424  109940 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.195373ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.157079  109940 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0114 23:01:22.157896  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.788263ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.158660  109940 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (1.366577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.159207  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (897.596µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.160261  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (715.436µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.161385  109940 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.954912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.161571  109940 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0114 23:01:22.161601  109940 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
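The two bootstrapped classes and their values (system-node-critical=2000001000, system-cluster-critical=2000000000) come straight from the lines above; user-defined PriorityClass values must stay below these built-ins. A minimal client-go sketch that creates equivalent objects; the kubeconfig path is hypothetical and the context-taking Create signature assumes a recent client-go:

```go
package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Names and values copied from the storage_scheduling.go lines above.
	for name, value := range map[string]int32{
		"system-node-critical":    2000001000,
		"system-cluster-critical": 2000000000,
	} {
		pc := &schedulingv1.PriorityClass{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Value:      value,
		}
		if _, err := clientset.SchedulingV1().PriorityClasses().Create(
			context.TODO(), pc, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
}
```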
I0114 23:01:22.161709  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.033939ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60894]
I0114 23:01:22.162934  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (779.274µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.165461  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (2.194124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.167336  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.32965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.170127  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.111979ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.172457  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.943709ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.172715  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I0114 23:01:22.174029  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.106468ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.176618  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.139479ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.176963  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0114 23:01:22.180234  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (2.372074ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.183374  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.683298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.184074  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
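From here the log repeats one idempotent pattern per bootstrap ClusterRole: a GET that returns 404 on a fresh cluster, a POST that returns 201, then a "created clusterrole..." line. A minimal sketch of that get-then-create pattern; the package and helper name are illustrative, not the apiserver's own bootstrap code:

```go
// Package rbacsketch sketches the idempotent GET-then-POST bootstrap
// pattern visible in the storage_rbac.go log lines.
package rbacsketch

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// EnsureClusterRole creates role only if it does not already exist.
func EnsureClusterRole(ctx context.Context, c kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := c.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present: nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real failure, not mere absence
	}
	_, err = c.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // lost a benign race with a concurrent bootstrapper
	}
	return err
}
```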
I0114 23:01:22.186107  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.186135  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.186180  109940 httplog.go:90] GET /healthz: (3.980456ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.187366  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (3.06165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.191117  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.350024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.191359  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0114 23:01:22.192617  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.088088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.195176  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.194399ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.195584  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0114 23:01:22.196843  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (886.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.199640  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360314ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.199996  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0114 23:01:22.201595  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.363291ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.205123  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.901214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.205369  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0114 23:01:22.206508  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (912.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.208980  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.007412ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.209275  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0114 23:01:22.211743  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (2.260982ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.215555  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.93513ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.215987  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0114 23:01:22.217149  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (945.125µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.220052  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.872987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.220340  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0114 23:01:22.222688  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (2.12378ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.228415  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.222305ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.228658  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0114 23:01:22.229791  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (931.396µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.233189  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.934367ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.233574  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0114 23:01:22.234831  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.036593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.237939  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.656831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.238238  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0114 23:01:22.239589  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.128777ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.244610  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.329962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.244920  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0114 23:01:22.246647  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.499683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.248512  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.482394ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.248706  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0114 23:01:22.250289  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.359896ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.254654  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.815999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.254862  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0114 23:01:22.256073  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.256100  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.256133  109940 httplog.go:90] GET /healthz: (3.212061ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:22.256237  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.014707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.258326  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.585483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.258687  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0114 23:01:22.260218  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.187535ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.266259  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.308961ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.266662  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 23:01:22.268469  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.573302ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.272502  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.522499ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.272726  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0114 23:01:22.274002  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.020709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.276655  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.204569ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.277076  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0114 23:01:22.278512  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.197555ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.281046  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.125446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.281492  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0114 23:01:22.283424  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.674232ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.286496  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.286542  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.286575  109940 httplog.go:90] GET /healthz: (4.480418ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.287776  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.027633ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.287984  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0114 23:01:22.288967  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (831.332µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.291525  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.007877ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.291730  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0114 23:01:22.293610  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.27424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.295920  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.296294  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0114 23:01:22.297533  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.042595ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.300872  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.806729ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.301225  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0114 23:01:22.302708  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.204512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.305689  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.521438ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.306073  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 23:01:22.308110  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.115205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.310391  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.766294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.310608  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 23:01:22.311756  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (863.097µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.314377  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.152164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.314625  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 23:01:22.315681  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (877.755µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.318871  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.240286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.319146  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 23:01:22.320740  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.366851ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.323266  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.08706ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.323542  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 23:01:22.330247  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (6.429308ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.333926  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.09482ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.334377  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 23:01:22.335654  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (996.982µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.337806  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.743831ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.338028  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 23:01:22.338989  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (811.072µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.341218  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.838074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.341566  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 23:01:22.342872  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (956.498µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.345614  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.308781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.345852  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 23:01:22.346859  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (806.84µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.352694  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.380869ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.353587  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 23:01:22.355103  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.355134  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.355163  109940 httplog.go:90] GET /healthz: (1.775139ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:22.355177  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.301848ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.360595  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.829529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.360859  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0114 23:01:22.362846  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.808584ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.365519  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.245864ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.365741  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 23:01:22.368560  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (2.547463ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.371011  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.904167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.371268  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0114 23:01:22.372589  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.060061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.377043  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.016889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.377348  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 23:01:22.378530  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (976.873µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.381319  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.284239ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.381600  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 23:01:22.383552  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.383576  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.383632  109940 httplog.go:90] GET /healthz: (1.907078ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.383674  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.751756ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.386438  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.026733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.386742  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 23:01:22.387880  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (951.196µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.389864  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.588128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.390563  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 23:01:22.391587  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (831.705µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.394582  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.496484ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.394767  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 23:01:22.396031  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.102898ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.398644  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.069179ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.398937  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0114 23:01:22.400139  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (968.727µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.402088  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.556909ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.402481  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 23:01:22.403519  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (811.756µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.406723  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.830643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.406934  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0114 23:01:22.408698  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.539751ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.410855  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716927ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.411121  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 23:01:22.412245  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (950.978µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.414146  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505632ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.414366  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 23:01:22.415308  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (775.725µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.418011  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.367451ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.418360  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 23:01:22.419928  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.322572ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.422249  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.811156ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.422561  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 23:01:22.423747  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (865.858µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.427498  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.046389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.427738  109940 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
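
(Every clusterrole above is reconciled with the same two-request sequence: a GET that returns 404 followed by a POST that returns 201. A minimal sketch of that ensure pattern with the typed client-go API is below; it uses recent client-go signatures with a context argument and is an illustration of the pattern seen in the log, not the storage_rbac.go code itself.)

    package bootstrap

    import (
        "context"

        rbacv1 "k8s.io/api/rbac/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureClusterRole mirrors the GET -> 404 -> POST sequence in the log:
    // look the role up first, create it only when the lookup says NotFound.
    func ensureClusterRole(ctx context.Context, cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
        _, err := cs.RbacV1().ClusterRoles().Get(ctx, role.Name, metav1.GetOptions{})
        if err == nil {
            return nil // already present, nothing to do
        }
        if !apierrors.IsNotFound(err) {
            return err // a real failure, not just "missing"
        }
        _, err = cs.RbacV1().ClusterRoles().Create(ctx, role, metav1.CreateOptions{})
        return err
    }

(The same sequence repeats below for clusterrolebindings and, later, for the namespaced Roles in kube-system, where the only difference is the namespaced client, cs.RbacV1().Roles(namespace).)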
I0114 23:01:22.432828  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (4.781614ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.436153  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.872127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.436466  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0114 23:01:22.438482  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.699694ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.453403  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.119988ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.454291  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.454323  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.454370  109940 httplog.go:90] GET /healthz: (901.142µs) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:22.454656  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0114 23:01:22.472234  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.302731ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.483073  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.483100  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.483134  109940 httplog.go:90] GET /healthz: (1.427729ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.493731  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.742466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.494018  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0114 23:01:22.512219  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.273823ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.534032  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.046754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.534392  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0114 23:01:22.552502  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.445634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.554663  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.554697  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.554732  109940 httplog.go:90] GET /healthz: (1.712506ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:22.573320  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.285813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.573571  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0114 23:01:22.583948  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.583989  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.584038  109940 httplog.go:90] GET /healthz: (1.713806ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.592005  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.098124ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.613757  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.347782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.614097  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0114 23:01:22.641267  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (5.457111ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.656579  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.648595ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.656808  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0114 23:01:22.657301  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.657323  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.657366  109940 httplog.go:90] GET /healthz: (1.023711ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:22.672469  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.535655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.682838  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.682869  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.682907  109940 httplog.go:90] GET /healthz: (1.16007ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.693818  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.863612ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.694115  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0114 23:01:22.712799  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.790341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.733501  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.40124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.733779  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0114 23:01:22.756488  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (5.443065ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.757507  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.757528  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.757570  109940 httplog.go:90] GET /healthz: (1.270049ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:22.776781  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.778077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.777385  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0114 23:01:22.783649  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.783677  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.783713  109940 httplog.go:90] GET /healthz: (1.88522ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.792603  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.257281ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.814720  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.720052ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.814984  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0114 23:01:22.833369  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.973952ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.853351  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.268215ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.853658  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0114 23:01:22.857205  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.857238  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.857278  109940 httplog.go:90] GET /healthz: (1.057822ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:22.872405  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.402377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.883332  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.883364  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.883413  109940 httplog.go:90] GET /healthz: (1.422529ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.893638  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.569132ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.893899  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0114 23:01:22.912834  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.078325ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.932841  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.886995ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.933090  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0114 23:01:22.954438  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (3.482931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:22.954980  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.955003  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.955037  109940 httplog.go:90] GET /healthz: (2.078148ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:22.973680  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.658522ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.974014  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0114 23:01:22.982874  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:22.982907  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:22.982943  109940 httplog.go:90] GET /healthz: (1.142619ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:22.993951  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (2.958753ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.012927  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.013304  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0114 23:01:23.032207  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.232023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.053964  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.991874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.054319  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0114 23:01:23.057549  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.057581  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.057622  109940 httplog.go:90] GET /healthz: (4.318736ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:23.072450  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.409929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.083197  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.083279  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.083358  109940 httplog.go:90] GET /healthz: (1.348104ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.094133  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.141205ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.094659  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0114 23:01:23.113938  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (2.636761ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.134046  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.988406ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.134321  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0114 23:01:23.152522  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.498978ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.154160  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.154204  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.154251  109940 httplog.go:90] GET /healthz: (1.090085ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:23.173642  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.173924  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0114 23:01:23.182930  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.182963  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.183014  109940 httplog.go:90] GET /healthz: (1.224761ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.195327  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (4.329221ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.214308  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.68116ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.214551  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0114 23:01:23.232427  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.380144ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.254024  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.985878ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.254281  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.254294  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0114 23:01:23.254302  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.254334  109940 httplog.go:90] GET /healthz: (1.096376ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:23.272485  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.518019ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.282863  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.282897  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.282933  109940 httplog.go:90] GET /healthz: (1.132925ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.293609  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.647298ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.293895  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0114 23:01:23.315022  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (4.053377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.333929  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.98822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.334393  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0114 23:01:23.352629  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.648373ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.355931  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.355967  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.356016  109940 httplog.go:90] GET /healthz: (2.862418ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:23.373254  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.743823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.373492  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0114 23:01:23.382644  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.382687  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.382723  109940 httplog.go:90] GET /healthz: (987.548µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.392423  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.233916ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.413707  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.50954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.413965  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0114 23:01:23.431963  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.032471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.453232  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.269797ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.453467  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0114 23:01:23.454746  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.454769  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.454806  109940 httplog.go:90] GET /healthz: (1.061548ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:23.475742  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (4.662384ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.482587  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.482616  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.482652  109940 httplog.go:90] GET /healthz: (869.25µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.493451  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.518446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.493691  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0114 23:01:23.512337  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.319349ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.534791  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.771142ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.535071  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0114 23:01:23.552698  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.258356ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.554112  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.554144  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.554182  109940 httplog.go:90] GET /healthz: (887.419µs) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:23.575729  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.726842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.575988  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0114 23:01:23.583765  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.583790  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.583829  109940 httplog.go:90] GET /healthz: (1.012952ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.594788  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (3.806039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.613974  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.674286ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.614262  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0114 23:01:23.633052  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.999412ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.654361  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.364469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.654605  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0114 23:01:23.657186  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.657213  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.657258  109940 httplog.go:90] GET /healthz: (4.327075ms) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:23.673267  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (2.251007ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.682839  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.682874  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.683227  109940 httplog.go:90] GET /healthz: (1.138253ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.693453  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.510446ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.693739  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0114 23:01:23.712610  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.397814ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.734441  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.401117ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.734896  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0114 23:01:23.752557  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.539518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.754230  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.754259  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.754305  109940 httplog.go:90] GET /healthz: (998.531µs) 0 [Go-http-client/1.1 127.0.0.1:32776]
I0114 23:01:23.774928  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.979986ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.775218  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0114 23:01:23.782852  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.782895  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.782927  109940 httplog.go:90] GET /healthz: (1.211938ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.793064  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.489399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.813694  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.667775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.813976  109940 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
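
(For triaging a run like this one, the httplog.go request lines are regular enough to parse: verb, path, latency in parentheses, HTTP status (0 on these /healthz probes), then the user agent and client address in brackets. A rough sketch follows; the pattern is inferred from the lines in this log only and is not an official format.)

    package main

    import (
        "fmt"
        "regexp"
    )

    // httplogRe matches request lines of the shape seen above:
    //   httplog.go:NN] VERB /path: (latency) status [user-agent remote-addr]
    var httplogRe = regexp.MustCompile(`httplog\.go:\d+\] (\w+) (\S+): \(([\d.]+)(µs|ms)\) (\d+) \[(.+) ([\d.]+:\d+)\]`)

    func main() {
        line := `I0114 23:01:23.813694  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.667775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]`
        if m := httplogRe.FindStringSubmatch(line); m != nil {
            // m[1]=verb m[2]=path m[3]+m[4]=latency m[5]=status m[7]=client addr
            fmt.Printf("verb=%s path=%s latency=%s%s status=%s addr=%s\n",
                m[1], m[2], m[3], m[4], m[5], m[7])
        }
    }

(Grouping the parsed lines by path or by status makes the bootstrap phases in this log, clusterroles, then clusterrolebindings, then kube-system roles, easy to separate from the interleaved health probes.)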
I0114 23:01:23.833143  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.135716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.834964  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.217988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.853120  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.119139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.853428  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0114 23:01:23.936483  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.936524  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.936593  109940 httplog.go:90] GET /healthz: (83.615214ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:23.936742  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.936757  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.936783  109940 httplog.go:90] GET /healthz: (54.775335ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:23.938294  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (67.207492ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:32776]
I0114 23:01:23.940705  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.881825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.943353  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.888786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.943742  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 23:01:23.946456  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (2.216203ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.948334  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.512887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.950391  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.68167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.950835  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 23:01:23.951890  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (801.441µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.953805  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.497516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:23.954821  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.954841  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.954868  109940 httplog.go:90] GET /healthz: (1.732117ms) 0 [Go-http-client/1.1 127.0.0.1:33224]
I0114 23:01:23.974799  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.571005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:23.975175  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 23:01:23.982849  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:23.982882  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:23.982928  109940 httplog.go:90] GET /healthz: (1.16175ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:23.992369  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.363255ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:23.994342  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.489718ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.013392  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.449734ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.013692  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 23:01:24.032785  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.78577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.034724  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.340607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.053790  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.793832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.054080  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 23:01:24.054216  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.054238  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.054267  109940 httplog.go:90] GET /healthz: (1.344085ms) 0 [Go-http-client/1.1 127.0.0.1:60896]
I0114 23:01:24.072714  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.720527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.074869  109940 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.36659ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.083177  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.083206  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.083247  109940 httplog.go:90] GET /healthz: (1.173341ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.093313  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.360824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.093586  109940 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 23:01:24.112399  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.384816ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.114355  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.502716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.134662  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.660761ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.134917  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0114 23:01:24.152578  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.513391ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.155839  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.155912  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.155947  109940 httplog.go:90] GET /healthz: (1.439408ms) 0 [Go-http-client/1.1 127.0.0.1:33224]
I0114 23:01:24.156934  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.903803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.174118  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.043448ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.174465  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0114 23:01:24.182949  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.182990  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.183091  109940 httplog.go:90] GET /healthz: (1.278554ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.192678  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.365101ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.194392  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.312172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.213366  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.328672ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.213773  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0114 23:01:24.232426  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.421148ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.234299  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.391121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.253625  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.542971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.254698  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.254737  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.254771  109940 httplog.go:90] GET /healthz: (1.228974ms) 0 [Go-http-client/1.1 127.0.0.1:33224]
I0114 23:01:24.255094  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0114 23:01:24.272344  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.345715ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.274674  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.874636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.283499  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.283541  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.283578  109940 httplog.go:90] GET /healthz: (1.844696ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.293325  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.347972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.293574  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0114 23:01:24.312627  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.43395ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.314479  109940 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.340367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.333807  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.776164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.334337  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0114 23:01:24.496478  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.496510  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.496553  109940 httplog.go:90] GET /healthz: (143.500687ms) 0 [Go-http-client/1.1 127.0.0.1:33224]
I0114 23:01:24.496691  109940 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0114 23:01:24.496712  109940 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I0114 23:01:24.496814  109940 httplog.go:90] GET /healthz: (114.516602ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33286]
I0114 23:01:24.496870  109940 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (145.838423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.500530  109940 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.457793ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.502854  109940 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (1.854035ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.503058  109940 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0114 23:01:24.557372  109940 httplog.go:90] GET /healthz: (1.112931ms) 200 [Go-http-client/1.1 127.0.0.1:60896]
W0114 23:01:24.558229  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558255  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558278  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 23:01:24.558323  109940 factory.go:174] Creating scheduler from algorithm provider 'DefaultProvider'
W0114 23:01:24.558413  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558621  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558632  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558649  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.558710  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 23:01:24.559041  109940 reflector.go:153] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.559052  109940 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.560133  109940 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (823.772µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.560391  109940 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.560904  109940 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.561525  109940 get.go:251] Starting watch for /api/v1/services, rv=26362 labels= fields= timeout=8m29s
I0114 23:01:24.561599  109940 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.561612  109940 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.561687  109940 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (397.646µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.561924  109940 reflector.go:153] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.561941  109940 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.563458  109940 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (326.627µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.564031  109940 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=26374 labels= fields= timeout=9m16s
I0114 23:01:24.564432  109940 reflector.go:153] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.564451  109940 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.564667  109940 reflector.go:153] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.564682  109940 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.565386  109940 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (296.27µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33320]
I0114 23:01:24.565605  109940 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (411.95µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33322]
I0114 23:01:24.565840  109940 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=26374 labels= fields= timeout=5m27s
I0114 23:01:24.566081  109940 reflector.go:153] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.566111  109940 reflector.go:188] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.566850  109940 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=26360 labels= fields= timeout=6m4s
I0114 23:01:24.566927  109940 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: (302.112µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33324]
I0114 23:01:24.567425  109940 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=26375 labels= fields= timeout=8m58s
I0114 23:01:24.567616  109940 get.go:251] Starting watch for /api/v1/nodes, rv=26362 labels= fields= timeout=6m4s
I0114 23:01:24.568268  109940 reflector.go:153] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.568285  109940 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I0114 23:01:24.569715  109940 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (977.552µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33318]
I0114 23:01:24.570406  109940 get.go:251] Starting watch for /api/v1/pods, rv=26362 labels= fields= timeout=6m8s
I0114 23:01:24.570559  109940 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (333.294µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33318]
I0114 23:01:24.571048  109940 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=26360 labels= fields= timeout=8m27s
I0114 23:01:24.583507  109940 httplog.go:90] GET /healthz: (933.878µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.585189  109940 httplog.go:90] GET /api/v1/namespaces/default: (1.267119ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.589099  109940 httplog.go:90] POST /api/v1/namespaces: (3.514782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.590813  109940 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.332131ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.595585  109940 httplog.go:90] POST /api/v1/namespaces/default/services: (4.400227ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.599192  109940 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.159854ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.602970  109940 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.339765ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.659284  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659336  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659343  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659348  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659353  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659368  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659373  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659377  109940 shared_informer.go:236] caches populated
I0114 23:01:24.659493  109940 shared_informer.go:236] caches populated
I0114 23:01:24.663138  109940 httplog.go:90] POST /api/v1/nodes: (3.031101ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.663585  109940 node_tree.go:86] Added node "test-node-0" in group "" to NodeTree
I0114 23:01:24.665948  109940 httplog.go:90] POST /api/v1/nodes: (2.373999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.668292  109940 httplog.go:90] POST /api/v1/nodes: (1.918552ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.670331  109940 httplog.go:90] POST /api/v1/nodes: (1.640323ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.672603  109940 node_tree.go:86] Added node "test-node-1" in group "" to NodeTree
I0114 23:01:24.672634  109940 node_tree.go:86] Added node "test-node-2" in group "" to NodeTree
I0114 23:01:24.672651  109940 node_tree.go:86] Added node "test-node-3" in group "" to NodeTree
I0114 23:01:24.674460  109940 httplog.go:90] POST /api/v1/nodes: (3.729468ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.676712  109940 node_tree.go:86] Added node "test-node-4" in group "" to NodeTree
I0114 23:01:24.677939  109940 httplog.go:90] POST /api/v1/nodes: (2.938386ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.678472  109940 node_tree.go:86] Added node "test-node-5" in group "" to NodeTree
I0114 23:01:24.681243  109940 httplog.go:90] POST /api/v1/nodes: (2.260177ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.681709  109940 node_tree.go:86] Added node "test-node-6" in group "" to NodeTree
I0114 23:01:24.686079  109940 httplog.go:90] POST /api/v1/nodes: (3.75252ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.686949  109940 node_tree.go:86] Added node "test-node-7" in group "" to NodeTree
I0114 23:01:24.691525  109940 httplog.go:90] POST /api/v1/nodes: (4.335996ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.692691  109940 node_tree.go:86] Added node "test-node-8" in group "" to NodeTree
I0114 23:01:24.697114  109940 httplog.go:90] POST /api/v1/nodes: (4.933185ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.697480  109940 node_tree.go:86] Added node "test-node-9" in group "" to NodeTree
I0114 23:01:24.702844  109940 httplog.go:90] POST /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods: (4.641957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.703695  109940 scheduling_queue.go:839] About to try and schedule pod score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
I0114 23:01:24.703724  109940 scheduler.go:562] Attempting to schedule pod: score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
W0114 23:01:24.704466  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.704510  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0114 23:01:24.704523  109940 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 23:01:24.704795  109940 scheduler_binder.go:278] AssumePodVolumes for pod "score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod", node "test-node-0"
I0114 23:01:24.704813  109940 scheduler_binder.go:288] AssumePodVolumes for pod "score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod", node "test-node-0": all PVCs bound and nothing to do
I0114 23:01:24.704883  109940 factory.go:488] Attempting to bind test-pod to test-node-0
I0114 23:01:24.707350  109940 httplog.go:90] POST /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod/binding: (2.092415ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.707734  109940 scheduler.go:704] pod score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod is bound successfully on node "test-node-0", 10 nodes evaluated, 10 nodes were found feasible.
I0114 23:01:24.710326  109940 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/events: (2.250903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.806609  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (1.721298ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.809062  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (1.957667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.822768  109940 httplog.go:90] DELETE /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (13.174843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.827886  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (2.317041ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.830261  109940 httplog.go:90] POST /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods: (1.83578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.830820  109940 scheduling_queue.go:839] About to try and schedule pod score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
I0114 23:01:24.830843  109940 scheduler.go:562] Attempting to schedule pod: score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
E0114 23:01:24.831152  109940 framework.go:532] error while running score plugin for pod "test-pod": injecting failure for pod test-pod
E0114 23:01:24.831192  109940 factory.go:438] Error scheduling score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod: error while running score plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0114 23:01:24.831225  109940 scheduler.go:741] Updating pod condition for score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod to (PodScheduled==False, Reason=Unschedulable)
I0114 23:01:24.835812  109940 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/events: (2.497654ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33364]
I0114 23:01:24.837599  109940 httplog.go:90] PUT /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod/status: (3.20839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
E0114 23:01:24.837868  109940 scheduler.go:593] error selecting node for pod: error while running score plugin for pod "test-pod": injecting failure for pod test-pod
I0114 23:01:24.838275  109940 scheduling_queue.go:839] About to try and schedule pod score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
I0114 23:01:24.838291  109940 scheduler.go:562] Attempting to schedule pod: score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
E0114 23:01:24.838596  109940 framework.go:532] error while running score plugin for pod "test-pod": injecting failure for pod test-pod
E0114 23:01:24.838621  109940 factory.go:438] Error scheduling score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod: error while running score plugin for pod "test-pod": injecting failure for pod test-pod; retrying
I0114 23:01:24.838642  109940 scheduler.go:741] Updating pod condition for score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod to (PodScheduled==False, Reason=Unschedulable)
E0114 23:01:24.838655  109940 scheduler.go:593] error selecting node for pod: error while running score plugin for pod "test-pod": injecting failure for pod test-pod
I0114 23:01:24.840956  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (1.897926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33364]
I0114 23:01:24.841721  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (7.745074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
E0114 23:01:24.842048  109940 factory.go:463] pod: score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod is already present in unschedulable queue
I0114 23:01:24.842469  109940 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/events: (3.103329ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33332]
I0114 23:01:24.932930  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (1.550748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.939588  109940 scheduling_queue.go:839] About to try and schedule pod score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
I0114 23:01:24.939632  109940 scheduler.go:722] Skip schedule deleting pod: score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/test-pod
I0114 23:01:24.942340  109940 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/events: (2.139781ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33364]
I0114 23:01:24.943003  109940 httplog.go:90] DELETE /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (7.166143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.946418  109940 httplog.go:90] GET /api/v1/namespaces/score-plugin3e3baeaf-9cb9-4ed3-b158-3579954e4018/pods/test-pod: (1.059188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.947230  109940 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=26362&timeout=6m8s&timeoutSeconds=368&watch=true: (376.988853ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33328]
I0114 23:01:24.947388  109940 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=26374&timeout=9m16s&timeoutSeconds=556&watch=true: (383.496123ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60896]
I0114 23:01:24.947506  109940 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=26374&timeout=5m27s&timeoutSeconds=327&watch=true: (381.802049ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33320]
I0114 23:01:24.947596  109940 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=26360&timeout=6m4s&timeoutSeconds=364&watch=true: (380.9389ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33326]
I0114 23:01:24.947625  109940 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=26362&timeout=8m29s&timeoutSeconds=509&watch=true: (386.368608ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33224]
I0114 23:01:24.947630  109940 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=26375&timeout=8m58s&timeoutSeconds=538&watch=true: (380.327349ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33324]
I0114 23:01:24.947727  109940 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=26360&timeout=8m27s&timeoutSeconds=507&watch=true: (376.849897ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33318]
I0114 23:01:24.950011  109940 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=26362&timeout=6m4s&timeoutSeconds=364&watch=true: (384.069843ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33322]
I0114 23:01:24.989834  109940 httplog.go:90] DELETE /api/v1/nodes: (42.761542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.990117  109940 controller.go:180] Shutting down kubernetes service endpoint reconciler
I0114 23:01:24.992256  109940 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.883401ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.995591  109940 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.746217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:33362]
I0114 23:01:24.996201  109940 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0114 23:01:24.996374  109940 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=26360&timeout=5m53s&timeoutSeconds=353&watch=true: (3.84285426s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60882]
--- FAIL: TestScorePlugin (4.18s)
    framework_test.go:580: Expected the pod to be scheduled on node "test-node-1", got "test-node-0"

				from junit_20200114-225458.xml
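
For context, TestScorePlugin registers an injectable score plugin with the scheduler framework and asserts that the pod lands on the node that plugin favors; here the pod was bound to "test-node-0" although the plugin was expected to steer it to "test-node-1". A minimal sketch of such a plugin, assuming the framework interfaces under k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1 at this revision (type and field names are illustrative, not the test's exact code):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	framework "k8s.io/kubernetes/pkg/scheduler/framework/v1alpha1"
)

// scorePlugin either favors one node or injects a failure; the injected
// failure is what produces the "error while running score plugin" lines
// earlier in this log.
type scorePlugin struct {
	failScore     bool
	highScoreNode string
}

var _ framework.ScorePlugin = &scorePlugin{}

func (sp *scorePlugin) Name() string { return "score-plugin" }

// Score gives the favored node the maximum score and every other node a
// low score; with failScore set it returns an error status instead.
func (sp *scorePlugin) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
	if sp.failScore {
		return 0, framework.NewStatus(framework.Error, fmt.Sprintf("injecting failure for pod %v", pod.Name))
	}
	if nodeName == sp.highScoreNode {
		return framework.MaxNodeScore, nil
	}
	return 1, nil
}

// ScoreExtensions is nil because the raw scores are already in range.
func (sp *scorePlugin) ScoreExtensions() framework.ScoreExtensions { return nil }

With a plugin like this registered, the assertion failure above means the favored node's high score was not decisive for this scheduling cycle.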

Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0114 22:44:20] Call tree:
!!! [0114 22:44:20]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0114 22:44:20]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0114 22:44:20]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0114 22:44:20]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0114 22:44:20]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0114 22:44:20] Running kubeadm tests
+++ [0114 22:44:27] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0114 22:45:15] Running tests without code coverage
{"Time":"2020-01-14T22:46:41.896208159Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t44.987s\n"}
✓  cmd/kubeadm/test/cmd (44.987s)
... skipping 302 lines ...
+++ [0114 22:48:42] Building kube-controller-manager
+++ [0114 22:48:48] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0114 22:49:22] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0114 22:49:22.956127   54738 serving.go:313] Generated self-signed cert in-memory
W0114 22:49:23.456684   54738 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 22:49:23.456742   54738 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0114 22:49:23.456753   54738 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0114 22:49:23.456774   54738 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0114 22:49:23.456812   54738 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0114 22:49:23.456841   54738 controllermanager.go:161] Version: v1.18.0-alpha.1.684+8453ebab2786c8
I0114 22:49:23.458370   54738 secure_serving.go:178] Serving securely on [::]:10257
I0114 22:49:23.458521   54738 tlsconfig.go:241] Starting DynamicServingCertificateController
I0114 22:49:23.458902   54738 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0114 22:49:23.458984   54738 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 41 lines ...
I0114 22:49:23.734590   54738 controllermanager.go:533] Started "deployment"
I0114 22:49:23.734610   54738 job_controller.go:143] Starting job controller
I0114 22:49:23.734621   54738 shared_informer.go:206] Waiting for caches to sync for job
I0114 22:49:23.734748   54738 deployment_controller.go:152] Starting deployment controller
I0114 22:49:23.734774   54738 shared_informer.go:206] Waiting for caches to sync for deployment
I0114 22:49:23.737384   54738 controllermanager.go:533] Started "ttl"
E0114 22:49:23.737842   54738 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0114 22:49:23.737859   54738 controllermanager.go:525] Skipping "service"
I0114 22:49:23.742215   54738 ttl_controller.go:116] Starting TTL controller
I0114 22:49:23.742242   54738 shared_informer.go:206] Waiting for caches to sync for TTL
I0114 22:49:23.742804   54738 controllermanager.go:533] Started "persistentvolume-binder"
W0114 22:49:23.743325   54738 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:49:23.743763   54738 pv_controller_base.go:294] Starting persistent volume controller
... skipping 45 lines ...
I0114 22:49:23.996303   54738 shared_informer.go:206] Waiting for caches to sync for certificate-csrapproving
I0114 22:49:23.996429   54738 controllermanager.go:533] Started "csrcleaner"
W0114 22:49:23.996442   54738 controllermanager.go:512] "bootstrapsigner" is disabled
W0114 22:49:23.996448   54738 controllermanager.go:512] "tokencleaner" is disabled
I0114 22:49:23.996459   54738 cleaner.go:81] Starting CSR cleaner controller
I0114 22:49:23.996721   54738 node_lifecycle_controller.go:77] Sending events to api server
E0114 22:49:23.996755   54738 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0114 22:49:23.996765   54738 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W0114 22:49:23.996778   54738 controllermanager.go:525] Skipping "ttl-after-finished"
W0114 22:49:24.005921   54738 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0114 22:49:24.006002   54738 controllermanager.go:533] Started "namespace"
I0114 22:49:24.006125   54738 namespace_controller.go:200] Starting namespace controller
I0114 22:49:24.006149   54738 shared_informer.go:206] Waiting for caches to sync for namespace
... skipping 69 lines ...
I0114 22:49:24.619573   54738 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
+++ command: run_kubectl_version_tests
I0114 22:49:24.623389   54738 shared_informer.go:213] Caches are synced for PV protection 
I0114 22:49:24.625028   54738 shared_informer.go:213] Caches are synced for ReplicationController 
I0114 22:49:24.625488   54738 shared_informer.go:213] Caches are synced for ReplicaSet 
I0114 22:49:24.632449   54738 shared_informer.go:213] Caches are synced for HPA 
E0114 22:49:24.633948   54738 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0114 22:49:24.634793   54738 shared_informer.go:213] Caches are synced for job 
+++ [0114 22:49:24] Testing kubectl version
E0114 22:49:24.642224   54738 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
{
  "major": "1",
  "minor": "18+",
  "gitVersion": "v1.18.0-alpha.1.684+8453ebab2786c8",
  "gitCommit": "8453ebab2786c8629ce5e3b25440abc70a107b1f",
  "gitTreeState": "clean",
... skipping 9 lines ...
I0114 22:49:24.934275   54738 shared_informer.go:213] Caches are synced for endpoint 
I0114 22:49:24.935034   54738 shared_informer.go:213] Caches are synced for deployment 
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
+++ [0114 22:49:25] Testing kubectl version: verify json output
I0114 22:49:25.132775   54738 shared_informer.go:213] Caches are synced for expand 
W0114 22:49:25.177333   54738 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0114 22:49:25.195800   54738 shared_informer.go:213] Caches are synced for resource quota 
I0114 22:49:25.216319   54738 shared_informer.go:213] Caches are synced for garbage collector 
I0114 22:49:25.216350   54738 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0114 22:49:25.219279   54738 shared_informer.go:213] Caches are synced for attach detach 
I0114 22:49:25.220241   54738 shared_informer.go:213] Caches are synced for PVC protection 
I0114 22:49:25.220267   54738 shared_informer.go:213] Caches are synced for GC 
... skipping 65 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0114 22:49:28] Creating namespace namespace-1579042168-32683
namespace/namespace-1579042168-32683 created
Context "test" modified.
+++ [0114 22:49:29] Testing RESTMapper
+++ [0114 22:49:29] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 601 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 188 lines ...
pod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0114 22:50:14] "kubectl patch with resourceVersion 533" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0114 22:50:15.949384   54738 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
(Bnode "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0114 22:50:28] Creating namespace namespace-1579042228-27465
namespace/namespace-1579042228-27465 created
Context "test" modified.
+++ [0114 22:50:28] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0114 22:50:28] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0114 22:50:31.691513   51297 client.go:361] parsed scheme: "endpoint"
I0114 22:50:31.691566   51297 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0114 22:50:31.697850   51297 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 102 lines ...
Context "test" modified.
+++ [0114 22:50:34] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 30 lines ...
I0114 22:50:38.172305   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-8484dd655", UID:"c2237c74-9670-4c73-b746-bc8f15ca209e", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-djbn5
I0114 22:50:38.177465   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-8484dd655", UID:"c2237c74-9670-4c73-b746-bc8f15ca209e", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-jxmwg
I0114 22:50:38.177970   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-8484dd655", UID:"c2237c74-9670-4c73-b746-bc8f15ca209e", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-h6kdq
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0114 22:50:42.304931   54738 horizontal.go:353] Horizontal Pod Autoscaler frontend has been deleted in namespace-1579042224-12404
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1579042235-31634\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1579042235-31634"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
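
The Conflict above is the expected negative case: the manifest pins resourceVersion "99", so the apply is stale and the server rejects it with its standard optimistic-concurrency error. Outside of a test, the remedy the message asks for (re-read, reapply the change, retry) is typically done with client-go's retry helper; a sketch, assuming a clientset and treating the label change as the intended mutation (in newer client-go versions Get and Update also take a context.Context):

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateWithRetry re-reads the Deployment and reapplies the change on each
// Conflict; the namespace, name, and label value mirror this test run.
func updateWithRetry(cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments("namespace-1579042235-31634").Get("nginx", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Labels == nil {
			d.Labels = map[string]string{}
		}
		d.Labels["name"] = "nginx2" // the change the stale apply was carrying
		_, err = cs.AppsV1().Deployments("namespace-1579042235-31634").Update(d)
		return err
	})
}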
deployment.apps/nginx configured
I0114 22:50:47.782907   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042235-31634", Name:"nginx", UID:"56463204-1a10-41b7-bb35-d4aec438bc1c", APIVersion:"apps/v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0114 22:50:47.785320   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-668b6c7744", UID:"a0ea9487-82b7-468e-bf7c-5acc15078dfb", APIVersion:"apps/v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-xlzqz
I0114 22:50:47.788138   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-668b6c7744", UID:"a0ea9487-82b7-468e-bf7c-5acc15078dfb", APIVersion:"apps/v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-h82sc
I0114 22:50:47.790344   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042235-31634", Name:"nginx-668b6c7744", UID:"a0ea9487-82b7-468e-bf7c-5acc15078dfb", APIVersion:"apps/v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-4457q
Successful
... skipping 141 lines ...
+++ [0114 22:50:55] Creating namespace namespace-1579042255-11386
namespace/namespace-1579042255-11386 created
Context "test" modified.
+++ [0114 22:50:55] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1579042255-11386 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1579042255-11386 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
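These get.sh cases assert on kubectl's structured output rather than the default table. A sketch of the kind of invocation being checked, assuming standard kubectl flags ("abc" is the deliberately missing pod from the log):
  # render only pod names, colon-separated, as the {{range.items}} assertions do
  kubectl get pods -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'
  # a single missing object yields the NotFound error rather than an empty list
  kubectl get pods abc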
Successful
message:I0114 22:50:57.888848   65162 loader.go:375] Config loaded from file:  /tmp/tmp.xWFr6Ievoq/.kube/config
I0114 22:50:57.890321   65162 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0114 22:50:57.919464   65162 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I0114 22:50:57.921344   65162 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 479 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0114 22:51:04] Creating namespace namespace-1579042264-8346
namespace/namespace-1579042264-8346 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>
Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-01-14T22:51:05Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1579042264-8346", "resourceVersion":"756", "selfLink":"/api/v1/namespaces/namespace-1579042264-8346/pods/valid-pod", "uid":"9f6f35b8-b037-48bb-8a9c-9e50b8963d40"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-01-14T22:51:05Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1579042264-8346","resourceVersion":"756","selfLink":"/api/v1/namespaces/namespace-1579042264-8346/pods/valid-pod","uid":"9f6f35b8-b037-48bb-8a9c-9e50b8963d40"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-01-14T22:51:05Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1579042264-8346 resourceVersion:756 selfLink:/api/v1/namespaces/namespace-1579042264-8346/pods/valid-pod uid:9f6f35b8-b037-48bb-8a9c-9e50b8963d40] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
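Both failures above are the expected behavior when an output template references a key the object lacks; jsonpath reports "missing is not found" while go-template reports "map has no entry". A minimal sketch, assuming the valid-pod object from this log:
  kubectl get pod valid-pod -o jsonpath='{.missing}'        # fails: missing is not found
  kubectl get pod valid-pod -o go-template='{{.missing}}'   # fails: map has no entry for key "missing"
  kubectl get pod valid-pod -o jsonpath='{.metadata.name}'  # succeeds: the key exists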
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0114 22:51:11] Creating namespace namespace-1579042271-11417
namespace/namespace-1579042271-11417 created
Context "test" modified.
+++ [0114 22:51:11] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
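The BadRequest above is expected: test-cmd runs against an API server with no kubelets, so test-pod is never scheduled and exec has no host to dial. A sketch of the form under test (the command after -- is an arbitrary placeholder):
  kubectl exec test-pod -- date   # fails with "does not have a host assigned" until the pod lands on a node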
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0114 22:51:12] Creating namespace namespace-1579042272-9629
namespace/namespace-1579042272-9629 created
Context "test" modified.
+++ [0114 22:51:12] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0114 22:51:12.999230   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042272-9629", Name:"frontend", UID:"3445a291-a51f-4a48-a2ba-f4405b198319", APIVersion:"apps/v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l9xx8
I0114 22:51:13.004936   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042272-9629", Name:"frontend", UID:"3445a291-a51f-4a48-a2ba-f4405b198319", APIVersion:"apps/v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8pqpv
I0114 22:51:13.005320   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042272-9629", Name:"frontend", UID:"3445a291-a51f-4a48-a2ba-f4405b198319", APIVersion:"apps/v1", ResourceVersion:"814", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-z8qbz
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-l9xx8 does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-l9xx8 does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
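This block exercises exec/attach against a TYPE/NAME instead of a bare pod name: kubectl resolves a pod through the resource's selector, which is why attaching to a ConfigMap fails with "selector ... not implemented". A sketch, assuming the objects created in this log:
  kubectl exec deployment/frontend -- date        # picks a pod via the deployment's selector
  kubectl attach configmap/test-set-env-config    # rejected: ConfigMaps have no selector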
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
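The secret cases first assert the NotFound, then create a secret and read a value back. A minimal sketch of the pattern, assuming the mysecret name from the log (the key/value pair is hypothetical):
  kubectl create secret generic mysecret --from-literal=username=user-specified
  # secret data is stored base64-encoded; decode to recover the literal
  kubectl get secret mysecret -o jsonpath='{.data.username}' | base64 -d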
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"903e82c3-1204-456c-a1b8-674f99f90757","resourceVersion":"834","creationTimestamp":"2020-01-14T22:51:14Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"903e82c3-1204-456c-a1b8-674f99f90757","resourceVersion":"837","creationTimestamp":"2020-01-14T22:51:14Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"903e82c3-1204-456c-a1b8-674f99f90757"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
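Judging by the watch-timeout failures above, these cases likely exercise kubectl's global --request-timeout flag; the error text spells out the accepted forms. A sketch of valid values (the flag name is inferred, not shown in the log):
  kubectl get pods --request-timeout=30   # bare integer: seconds
  kubectl get pods --request-timeout=2m   # integer plus a time unit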
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
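The crd.sh patches above all go through --type=merge, because (as the next error spells out) no strategic-merge schema exists for custom resources. A minimal sketch against the foos/test object from this log:
  # CustomResources require a JSON merge patch; a strategic merge patch would be rejected
  kubectl patch foos/test --type merge -p '{"patched":"value1"}'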
+++ [0114 22:51:26] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 194 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
Recording: run_cmd_with_img_tests
... skipping 11 lines ...
I0114 22:51:43.215762   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042302-7487", Name:"test1-6cdffdb5b8", UID:"cef84992-221e-4541-a824-1c4068aa9a8e", APIVersion:"apps/v1", ResourceVersion:"1001", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-j8hvc
Successful
message:deployment.apps/test1 created
has:deployment.apps/test1 created
deployment.apps "test1" deleted
W0114 22:51:43.392841   51297 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:51:43.394203   54738 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
+++ [0114 22:51:43] Testing recursive resources
+++ [0114 22:51:43] Creating namespace namespace-1579042303-25628
W0114 22:51:43.521925   51297 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:51:43.523176   54738 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042303-25628 created
Context "test" modified.
W0114 22:51:43.641681   51297 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:51:43.642934   54738 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0114 22:51:43.769095   51297 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 22:51:43.770466   54738 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
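The recursive cases feed kubectl a directory tree in which one manifest (busybox-broken.yaml) deliberately omits its kind, checking that the valid files are processed and the error is still surfaced. A sketch of the invocation pattern, assuming the testdata path from the log:
  # -R descends into hack/testdata/recursive/pod/, creating what parses and reporting what does not
  kubectl create -f hack/testdata/recursive/pod -R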
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:44.395457   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:51:44.524322   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:44.648386   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:44.771803   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1579042303-25628
Priority:     0
Node:         <none>
... skipping 155 lines ...
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:45.396893   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:51:45.525553   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:45.649732   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:45.773038   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0114 22:51:46.303301   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042303-25628", Name:"nginx", UID:"d3eedc15-bb73-4d91-9565-80499084a080", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0114 22:51:46.309301   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx-f87d999f7", UID:"e5c01c90-269b-4ef7-8040-acf6881a5cb4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-fm2vf
I0114 22:51:46.311643   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx-f87d999f7", UID:"e5c01c90-269b-4ef7-8040-acf6881a5cb4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-xqs6p
I0114 22:51:46.314045   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx-f87d999f7", UID:"e5c01c90-269b-4ef7-8040-acf6881a5cb4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-kmq54
E0114 22:51:46.397948   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0114 22:51:46.526812   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
E0114 22:51:46.650752   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 32 lines ...
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
E0114 22:51:46.774366   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0114 22:51:47.063000   54738 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:47.399485   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:47.528409   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:51:47.652119   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:47.775791   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0114 22:51:48.401106   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:48.529703   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:48.654031   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
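Force deletion with a zero grace period, as used above, returns before the kubelet confirms termination, hence the warning. A sketch of the form under test (pod names from the log):
  kubectl delete pod busybox0 busybox1 --grace-period=0 --force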
E0114 22:51:48.777203   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0114 22:51:49.084280   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox0", UID:"97da916f-cbab-4045-be9e-98e84e384e39", APIVersion:"v1", ResourceVersion:"1058", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-fkcp5
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:51:49.089845   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox1", UID:"6b9624aa-8c73-485d-8eab-ed12e21caa1f", APIVersion:"v1", ResourceVersion:"1060", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-kf4z2
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(BE0114 22:51:49.402544   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
E0114 22:51:49.531062   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
E0114 22:51:49.655497   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:49.778435   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0114 22:51:50.404736   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
E0114 22:51:50.532417   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:50.656872   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:50.779773   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0114 22:51:51.393881   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox0", UID:"97da916f-cbab-4045-be9e-98e84e384e39", APIVersion:"v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-65jrt
E0114 22:51:51.405771   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:51:51.408435   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox1", UID:"6b9624aa-8c73-485d-8eab-ed12e21caa1f", APIVersion:"v1", ResourceVersion:"1084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5d7nx
E0114 22:51:51.533946   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
E0114 22:51:51.658503   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0114 22:51:51.781117   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0114 22:51:52.387835   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042303-25628", Name:"nginx1-deployment", UID:"38f1ea95-31be-4c70-81ea-f00e9de9f991", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:51:52.392865   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx1-deployment-7bdbbfb5cf", UID:"14f8f11e-a494-433f-b954-acb790fb0194", APIVersion:"apps/v1", ResourceVersion:"1102", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-q782h
I0114 22:51:52.396413   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx1-deployment-7bdbbfb5cf", UID:"14f8f11e-a494-433f-b954-acb790fb0194", APIVersion:"apps/v1", ResourceVersion:"1102", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-m8lhp
I0114 22:51:52.396563   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042303-25628", Name:"nginx0-deployment", UID:"58a26573-7239-4704-9ab0-b2b8eca281a3", APIVersion:"apps/v1", ResourceVersion:"1103", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0114 22:51:52.399405   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx0-deployment-57c6bff7f6", UID:"c740d232-1573-4385-8ae0-2a4a0145c549", APIVersion:"apps/v1", ResourceVersion:"1107", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-bd44k
I0114 22:51:52.404148   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042303-25628", Name:"nginx0-deployment-57c6bff7f6", UID:"c740d232-1573-4385-8ae0-2a4a0145c549", APIVersion:"apps/v1", ResourceVersion:"1107", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-sf2fq
E0114 22:51:52.406928   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:52.535210   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
E0114 22:51:52.660059   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
E0114 22:51:52.782492   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
E0114 22:51:53.408787   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0114 22:51:53.536929   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0114 22:51:53.661518   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E0114 22:51:53.783852   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:54.410203   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:54.538120   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:54.662687   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:54.785449   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0114 22:51:55.077038   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox0", UID:"9a8abe38-be20-4f6a-b67b-b7e99a0c8d08", APIVersion:"v1", ResourceVersion:"1153", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-lxgsq
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0114 22:51:55.084366   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042303-25628", Name:"busybox1", UID:"7b01f49d-e007-49d0-b977-8481d59783ad", APIVersion:"v1", ResourceVersion:"1155", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-st5t2
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0114 22:51:55.411621   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
E0114 22:51:55.539581   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
E0114 22:51:55.664213   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
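Note: the decode failures above are expected; the fixture deliberately misspells the "kind" key as "ind", so the decoder reports Object 'Kind' is missing while the valid objects in the same directory are still processed. A minimal sketch reproducing the same error (file path hypothetical):

cat > /tmp/broken-rc.yaml <<'EOF'
apiVersion: v1
ind: ReplicationController   # "kind" misspelled on purpose
metadata:
  name: busybox2
EOF
kubectl create -f /tmp/broken-rc.yaml
# error: unable to decode "/tmp/broken-rc.yaml": Object 'Kind' is missing in '...'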
E0114 22:51:55.787037   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:56.413443   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:56.541110   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:56.665854   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
E0114 22:51:56.788644   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0114 22:51:56] Testing kubectl(v1:namespaces)
namespace/my-namespace created
core.sh:1314: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
I0114 22:51:57.205250   54738 shared_informer.go:206] Waiting for caches to sync for resource quota
I0114 22:51:57.205307   54738 shared_informer.go:213] Caches are synced for resource quota 
E0114 22:51:57.414786   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:57.542486   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:57.667146   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:51:57.723102   54738 shared_informer.go:206] Waiting for caches to sync for garbage collector
I0114 22:51:57.723160   54738 shared_informer.go:213] Caches are synced for garbage collector 
E0114 22:51:57.790159   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:58.416257   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:58.543826   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:58.668821   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:58.791568   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:59.417630   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:59.545218   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:59.670408   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:51:59.793323   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:00.419011   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:00.546551   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:00.672543   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:00.794867   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:01.420475   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:01.547547   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:01.673802   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:01.796467   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
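Note: the "condition met" / NotFound pair above is consistent with waiting for the namespace to finish deleting and then confirming it is gone, e.g.:

kubectl wait --for=delete namespace/my-namespace --timeout=60s   # namespace/my-namespace condition met
kubectl get namespace my-namespace   # Error from server (NotFound): namespaces "my-namespace" not found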
E0114 22:52:02.421726   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
E0114 22:52:02.549054   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1323: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
E0114 22:52:02.675201   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:02.797481   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1579042166-755" deleted
namespace "namespace-1579042168-32683" deleted
... skipping 26 lines ...
namespace "namespace-1579042276-29776" deleted
namespace "namespace-1579042277-10694" deleted
namespace "namespace-1579042279-11873" deleted
namespace "namespace-1579042281-8634" deleted
namespace "namespace-1579042302-7487" deleted
namespace "namespace-1579042303-25628" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1579042166-755" deleted
... skipping 27 lines ...
namespace "namespace-1579042276-29776" deleted
namespace "namespace-1579042277-10694" deleted
namespace "namespace-1579042279-11873" deleted
namespace "namespace-1579042281-8634" deleted
namespace "namespace-1579042302-7487" deleted
namespace "namespace-1579042303-25628" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
core.sh:1335: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1339: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1343: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:03.422987   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
core.sh:1347: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E0114 22:52:03.550288   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1349: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E0114 22:52:03.676471   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
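Note: kubectl refuses to fetch a single named resource across every namespace, which is what this assertion exercises:

kubectl get pod valid-pod --all-namespaces
# error: a resource cannot be retrieved by name across all namespaces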
E0114 22:52:03.798752   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1356: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1360: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
E0114 22:52:04.424357   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:04.551530   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:04.677612   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:04.682310   54738 horizontal.go:353] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1579042303-25628
I0114 22:52:04.685914   54738 horizontal.go:353] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1579042303-25628
E0114 22:52:04.799823   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:05.425602   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:05.552899   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:05.678838   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:05.801126   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:06.426880   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:06.554303   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:06.680237   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:06.802383   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:07.428221   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:07.555676   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:07.681567   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:07.806018   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:08.430605   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:08.557316   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:08.682405   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:08.807326   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
+++ [0114 22:52:09] Creating namespace namespace-1579042329-16209
E0114 22:52:09.432465   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042329-16209 created
Context "test" modified.
+++ [0114 22:52:09] Testing secrets
E0114 22:52:09.558952   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:09.597630   71632 loader.go:375] Config loaded from file:  /tmp/tmp.xWFr6Ievoq/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
... skipping 25 lines ...
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
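Note: dmFsdWUx is base64 for "value1", and creationTimestamp: null marks an object that never reached the server, so the YAML above is consistent with a client-side dry run such as (flag spelling of this kubectl vintage; newer releases use --dry-run=client):

kubectl create secret generic test --from-literal=key1=value1 --dry-run -o yaml
# the "has not:example.com" check then asserts that string never appears in the output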
E0114 22:52:09.683720   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
E0114 22:52:09.808726   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
E0114 22:52:10.434931   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0114 22:52:10.560402   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:10.685160   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E0114 22:52:10.809857   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E0114 22:52:11.436304   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E0114 22:52:11.561778   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E0114 22:52:11.686406   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E0114 22:52:11.810867   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/secret-string-data created
core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0114 22:52:12.390250   54738 namespace_controller.go:185] Namespace has been deleted my-namespace
core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
E0114 22:52:12.437520   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
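Note: stringData is write-only; the API server base64-encodes it into .data (djE= and djI= decode to v1 and v2), which is why .data is populated while .stringData reads back as <no value>. A sketch:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret/secret-string-data -o go-template='{{.data}}'        # map[k1:djE= k2:djI=]
kubectl get secret/secret-string-data -o go-template='{{.stringData}}'  # <no value>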
E0114 22:52:12.562982   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "secret-string-data" deleted
E0114 22:52:12.687673   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:12.812104   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
I0114 22:52:12.915091   54738 namespace_controller.go:185] Namespace has been deleted kube-node-lease
I0114 22:52:12.950789   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042192-60
I0114 22:52:12.950793   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042187-16614
I0114 22:52:12.952685   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042166-755
I0114 22:52:12.957079   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042193-7096
... skipping 11 lines ...
I0114 22:52:13.227787   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042223-19608
I0114 22:52:13.227802   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042228-27465
I0114 22:52:13.261720   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042228-1204
I0114 22:52:13.278132   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042224-12404
I0114 22:52:13.293980   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042232-20565
I0114 22:52:13.425711   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042234-16180
E0114 22:52:13.438782   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:13.468901   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042253-7491
I0114 22:52:13.480379   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042254-6522
I0114 22:52:13.489678   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042255-11386
I0114 22:52:13.491363   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042271-11417
I0114 22:52:13.501316   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042276-16875
I0114 22:52:13.525150   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042264-8346
I0114 22:52:13.542831   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042235-31634
I0114 22:52:13.550471   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042276-29776
I0114 22:52:13.551985   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042272-9629
E0114 22:52:13.564535   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:13.618428   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042277-10694
I0114 22:52:13.650760   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042279-11873
I0114 22:52:13.658192   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042281-8634
I0114 22:52:13.664878   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042302-7487
E0114 22:52:13.689247   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:13.714055   54738 namespace_controller.go:185] Namespace has been deleted namespace-1579042303-25628
E0114 22:52:13.813395   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:14.229381   54738 namespace_controller.go:185] Namespace has been deleted other
E0114 22:52:14.440064   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:14.565741   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:14.690545   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:14.814342   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:15.441300   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:15.566973   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:15.692154   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:15.815532   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:16.442699   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:16.568333   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:16.693514   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:16.816755   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:17.444139   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:17.569526   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:17.694855   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:17.817944   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [0114 22:52:18] Creating namespace namespace-1579042338-24538
namespace/namespace-1579042338-24538 created
Context "test" modified.
+++ [0114 22:52:18] Testing configmaps
E0114 22:52:18.445344   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:18.570644   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-configmap created
E0114 22:52:18.696323   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
E0114 22:52:18.819421   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
E0114 22:52:19.446844   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-configmap created
configmap/test-binary-configmap created
E0114 22:52:19.571760   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
E0114 22:52:19.697456   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
E0114 22:52:19.820673   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
E0114 22:52:20.448401   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:20.572980   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:20.698652   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:20.821841   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:21.449863   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:21.574152   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:21.700040   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:21.823132   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:22.451265   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:22.575403   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:22.701256   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:22.824501   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:23.097127   54738 namespace_controller.go:185] Namespace has been deleted test-secrets
E0114 22:52:23.452578   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:23.576816   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:23.702556   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:23.825707   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:24.453887   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:24.578149   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:24.704035   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:24.826939   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
E0114 22:52:25.455361   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0114 22:52:25] Creating namespace namespace-1579042345-8428
namespace/namespace-1579042345-8428 created
E0114 22:52:25.579281   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:52:25] Testing client config
E0114 22:52:25.705332   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
E0114 22:52:25.828306   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
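Note: each failure above maps to pointing kubectl at a config piece that does not exist; the flag spellings are real, the names are the tests' placeholders:

kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
kubectl get pods --context=missing-context   # context was not found for specified context: missing-context
kubectl get pods --cluster=missing-cluster   # error: no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user         # error: auth info "missing-user" does not exist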
+++ exit code: 0
E0114 22:52:26.457042   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_accounts_tests
+++ [0114 22:52:26] Creating namespace namespace-1579042346-23327
E0114 22:52:26.580611   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042346-23327 created
E0114 22:52:26.706722   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:52:26] Testing service accounts
E0114 22:52:26.829571   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
serviceaccount/test-service-account created
core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
(Bserviceaccount "test-service-account" deleted
E0114 22:52:27.458009   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-service-accounts" deleted
E0114 22:52:27.581962   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:27.708139   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:27.830702   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:28.459222   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:28.583069   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:28.709383   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:28.831850   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:29.461226   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:29.584720   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:29.710993   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:29.833703   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:30.342265   54738 namespace_controller.go:185] Namespace has been deleted test-configmaps
E0114 22:52:30.462963   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:30.585941   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:30.712442   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:30.834962   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:31.464501   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:31.587166   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:31.713679   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:31.836234   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:32.465860   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:32.588581   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests
E0114 22:52:32.715011   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [0114 22:52:32] Creating namespace namespace-1579042352-13960
E0114 22:52:32.837782   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042352-13960 created
Context "test" modified.
+++ [0114 22:52:32] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
namespace/test-jobs created
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/pi created
batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
E0114 22:52:33.467016   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0        <none>          0s
E0114 22:52:33.589798   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Name:                          pi
Namespace:                     test-jobs
Labels:                        run=pi
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  run=pi
... skipping 13 lines ...
    Environment:     <none>
    Mounts:          <none>
  Volumes:           <none>
Last Schedule Time:  <unset>
Active Jobs:         <none>
Events:              <none>
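Note: the deprecation warning above steers away from kubectl run --generator=cronjob/v1beta1; the supported way to create a cron job with this schedule is roughly (image and command hypothetical, the fixture's args are elided above):

kubectl create cronjob pi --schedule='59 23 31 2 *' --image=k8s.gcr.io/perl \
  -- perl -Mbignum=bpi -wle 'print bpi(20)'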
E0114 22:52:33.716805   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:job.batch/test-job
has:job.batch/test-job
E0114 22:52:33.838953   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0114 22:52:33.984310   54738 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"b04937b5-d284-41a7-8916-885b279987e7", APIVersion:"batch/v1", ResourceVersion:"1495", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-qvndq
job.batch/test-job created
batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
NAME       COMPLETIONS   DURATION   AGE
test-job   0/1           1s         1s
... skipping 5 lines ...
                run=pi
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Tue, 14 Jan 2020 22:52:33 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=b04937b5-d284-41a7-8916-885b279987e7
           job-name=test-job
           run=pi
  Containers:
   pi:
... skipping 13 lines ...
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  1s    job-controller  Created pod: test-job-qvndq
job.batch "test-job" deleted
E0114 22:52:34.468478   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch "pi" deleted
namespace "test-jobs" deleted
E0114 22:52:34.590672   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:34.718174   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:34.840288   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:35.469827   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:35.592027   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:35.719451   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:35.841471   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:36.471178   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:36.593355   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:36.721031   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:36.842570   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:37.472803   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:37.594630   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:37.629003   54738 namespace_controller.go:185] Namespace has been deleted test-service-accounts
E0114 22:52:37.722414   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:37.844069   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:38.474212   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:38.595908   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:38.723833   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:38.845325   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:39.475396   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:39.596721   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
E0114 22:52:39.725080   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_create_job_tests
Running command: run_create_job_tests

+++ Running case: test-cmd.run_create_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_job_tests
+++ [0114 22:52:39] Creating namespace namespace-1579042359-31374
E0114 22:52:39.846594   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042359-31374 created
Context "test" modified.
I0114 22:52:40.052597   54738 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579042359-31374", Name:"test-job", UID:"c0a22787-642e-4561-965b-455ed746274b", APIVersion:"batch/v1", ResourceVersion:"1516", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-6nrd4
job.batch/test-job created
create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
(Bjob.batch "test-job" deleted
I0114 22:52:40.348619   54738 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579042359-31374", Name:"test-job-pi", UID:"ae45d465-9d8b-473e-b548-a167d826378f", APIVersion:"batch/v1", ResourceVersion:"1525", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-jtjgv
job.batch/test-job-pi created
create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
E0114 22:52:40.476726   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
job.batch "test-job-pi" deleted
E0114 22:52:40.597809   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/test-pi created
E0114 22:52:40.726332   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:40.774701   54738 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1579042359-31374", Name:"my-pi", UID:"731c6578-baf2-4de7-9653-7d8c04bed73f", APIVersion:"batch/v1", ResourceVersion:"1533", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-8tzcr
job.batch/my-pi created
E0114 22:52:40.848119   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
cronjob.batch "test-pi" deleted
+++ exit code: 0
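
The create_job_tests case above exercises kubectl's job-creation paths: two jobs created directly, a CronJob made via the deprecated run generator, and a job cut from that CronJob. (The repeating reflector.go:156 PartialObjectMetadata errors throughout this log appear to be background informer noise against the test apiserver, not assertion failures.) A rough reconstruction of the commands behind these lines, using the names and images from the log; the exact flags and the cron schedule are assumptions, not read from test/cmd/create.sh:

    kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
    kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
    kubectl run test-pi --generator=cronjob/v1beta1 --schedule='59 23 31 2 *' --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
    kubectl create job my-pi --from=cronjob/test-pi    # inherits the perl command checked above
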
... skipping 4 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
+++ [0114 22:52:41] Creating namespace namespace-1579042361-17751
namespace/namespace-1579042361-17751 created
Context "test" modified.
+++ [0114 22:52:41] Testing pod templates
E0114 22:52:41.477749   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1421: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:41.599152   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:41.695020   51297 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
E0114 22:52:41.727478   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1425: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0114 22:52:41.849359   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
core.sh:1433: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
podtemplate "nginx" deleted
core.sh:1437: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
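
The pod-template case is a simple create/list/delete round trip; the NAME/CONTAINERS/IMAGES table above is plain kubectl get podtemplates output. A plausible command sketch (the manifest path is an assumption):

    kubectl create -f test/fixtures/doc-yaml/user-guide/pod-template.yaml   # podtemplate/nginx created
    kubectl get podtemplates
    kubectl delete podtemplate nginx
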
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_tests
E0114 22:52:42.478918   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:52:42] Testing kubectl(v1:services)
E0114 22:52:42.600339   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:42.728586   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
E0114 22:52:42.850423   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
matched Name:
matched Labels:
matched Selector:
matched IP:
matched Port:
... skipping 58 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:52:43.480475   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Labels:
matched Selector:
matched IP:
matched Port:
matched Endpoints:
... skipping 25 lines ...
IP:                10.0.0.159
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:52:43.601580   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 18 lines ...
IP:                10.0.0.159
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E0114 22:52:43.730346   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 16 lines ...
Type:              ClusterIP
IP:                10.0.0.159
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
E0114 22:52:43.851725   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
... skipping 63 lines ...
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
service/redis-master selector updated
core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
E0114 22:52:44.481708   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master selector updated
E0114 22:52:44.602848   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0114 22:52:44.675376   54738 namespace_controller.go:185] Namespace has been deleted test-jobs
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-01-14T22:52:42Z"
... skipping 15 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
E0114 22:52:44.731566   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
E0114 22:52:44.852962   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
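
The Conflict above is the expected outcome of this check, not a flake: kubectl set selector submits the object's resourceVersion, so pinning a stale version makes the API server reject the update with 409 Conflict. A hedged sketch of the pattern (the exact version value used by core.sh is an assumption):

    kubectl set selector services redis-master role=padawan --resource-version=1
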
core.sh:911: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E0114 22:52:45.482822   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
E0114 22:52:45.604110   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:918: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:45.733185   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:922: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:45.854483   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
core.sh:926: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
core.sh:930: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service/service-v1-test created
E0114 22:52:46.484388   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:951: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
E0114 22:52:46.605400   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:46.734584   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test replaced
E0114 22:52:46.855777   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:958: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
(Bservice "redis-master" deleted
service "service-v1-test" deleted
core.sh:966: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:970: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:47.485725   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:47.606447   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master created
E0114 22:52:47.735797   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-slave created
E0114 22:52:47.857087   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:975: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
Successful
message:NAME           RSRC
kubernetes     144
redis-master   1574
redis-slave    1577
has:redis-master
core.sh:985: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
(Bservice "redis-master" deleted
service "redis-slave" deleted
core.sh:992: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:48.487082   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:996: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E0114 22:52:48.607803   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/beep-boop created
E0114 22:52:48.737549   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1000: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
E0114 22:52:48.858302   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1004: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
(Bservice "beep-boop" deleted
core.sh:1011: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1015: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I0114 22:52:49.342392   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"d4d149c1-caf0-4f57-818f-03548a3ba4c7", APIVersion:"apps/v1", ResourceVersion:"1591", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
I0114 22:52:49.350919   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"0861f959-ef5d-4cad-afde-ae3ef7598ca3", APIVersion:"apps/v1", ResourceVersion:"1592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-8tc9k
I0114 22:52:49.354424   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"0861f959-ef5d-4cad-afde-ae3ef7598ca3", APIVersion:"apps/v1", ResourceVersion:"1592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-5bzwc
service/testmetadata created
deployment.apps/testmetadata created
core.sh:1019: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
E0114 22:52:49.488435   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1020: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
E0114 22:52:49.608887   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/exposemetadata exposed
E0114 22:52:49.738755   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1026: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
E0114 22:52:49.859642   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "exposemetadata" deleted
service "testmetadata" deleted
deployment.apps "testmetadata" deleted
+++ exit code: 0
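
The service tests above cycle through create, describe, selector updates, replace, and expose. A command-level sketch of the selector portion, grounded in the core.sh:890, core.sh:894, and core.sh:898 checks and the --local error logged above (manifest paths omitted; exact flag usage is an assumption):

    kubectl set selector services redis-master role=padawan                        # core.sh:890 expects padawan:
    kubectl set selector services redis-master app=redis,role=master,tier=backend  # core.sh:894 expects redis:master:backend:
    kubectl set selector services redis-master role=padawan --local -o yaml        # fails: --local requires --filename
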
Recording: run_daemonset_tests
Running command: run_daemonset_tests
... skipping 3 lines ...
+++ command: run_daemonset_tests
+++ [0114 22:52:50] Creating namespace namespace-1579042370-10485
namespace/namespace-1579042370-10485 created
Context "test" modified.
+++ [0114 22:52:50] Testing kubectl(v1:daemonsets)
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:50.489615   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:52:50.592960   51297 controller.go:606] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I0114 22:52:50.602494   51297 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
E0114 22:52:50.609734   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
E0114 22:52:50.740119   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:50.860967   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
E0114 22:52:51.490899   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
daemonset.apps/bind resource requirements updated
E0114 22:52:51.610832   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
E0114 22:52:51.741309   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind restarted
E0114 22:52:51.862426   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
(Bdaemonset.apps "bind" deleted
+++ exit code: 0
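
Each "updated"/"restarted" line in the daemonset case above bumps .metadata.generation by one, which the apps.sh assertions track from 1 to 5 (the intermediate "configured" is a re-apply of an unchanged template, so the generation stays at 1). A plausible reconstruction of the mutating commands; the image, env, and resource values are assumptions:

    kubectl apply -f hack/testdata/rollingupdate-daemonset.yaml            # generation 1
    kubectl set image daemonsets/bind *=k8s.gcr.io/pause:test-cmd          # generation 2
    kubectl set env daemonsets/bind foo=bar                                # generation 3
    kubectl set resources daemonsets/bind --limits=cpu=200m,memory=512Mi   # generation 4
    kubectl rollout restart daemonsets/bind                                # generation 5
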
Recording: run_daemonset_history_tests
Running command: run_daemonset_history_tests

... skipping 2 lines ...
+++ command: run_daemonset_history_tests
+++ [0114 22:52:52] Creating namespace namespace-1579042372-9881
namespace/namespace-1579042372-9881 created
Context "test" modified.
+++ [0114 22:52:52] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:52.492151   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:52.612211   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind created
E0114 22:52:52.742389   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579042372-9881"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
E0114 22:52:52.863455   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
E0114 22:52:53.493309   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:52:53.613196   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E0114 22:52:53.743541   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579042372-9881"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1579042372-9881"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind will roll back to Pod Template:
  Labels:	service=bind
  Containers:
... skipping 2 lines ...
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
 (dry run)
E0114 22:52:53.864798   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0114 22:52:54.326119   54738 daemon_controller.go:291] namespace-1579042372-9881/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1579042372-9881", SelfLink:"/apis/apps/v1/namespaces/namespace-1579042372-9881/daemonsets/bind", UID:"fee74432-4aea-4ecf-8ea8-73062ea473ad", ResourceVersion:"1659", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714639172, loc:(*time.Location)(0x6b23a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1579042372-9881\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bc4940), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f561d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00192df20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001bc4960), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0015186e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002f5622c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0114 22:52:54.494664   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0114 22:52:54.614325   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
E0114 22:52:54.744999   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0114 22:52:54.866028   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0114 22:52:54.997147   54738 daemon_controller.go:291] namespace-1579042372-9881/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1579042372-9881", SelfLink:"/apis/apps/v1/namespaces/namespace-1579042372-9881/daemonsets/bind", UID:"fee74432-4aea-4ecf-8ea8-73062ea473ad", ResourceVersion:"1662", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63714639172, loc:(*time.Location)(0x6b23a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1579042372-9881\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001b726c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002e9a4c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022be1e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001b72700), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001518098)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002e9a51c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
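
The rollback sequence above walks the controllerrevision history: revision 1 is hack/testdata/rollingupdate-daemonset.yaml and revision 2 is rollingupdate-daemonset-rv2.yaml (both paths appear verbatim in the change-cause annotations). The two giant daemon_controller.go "object has been modified" dumps are optimistic-concurrency conflicts from the controller's status writes racing the rollbacks; the controller retries, and the case still exits 0. A sketch of the rollout calls in log order, with apply flags assumed to match the recorded change-cause:

    kubectl rollout undo daemonsets/bind --to-revision=1             # skipped: template already matches revision 1
    kubectl apply -f hack/testdata/rollingupdate-daemonset-rv2.yaml --record --server=http://127.0.0.1:8080 --match-server-version
    kubectl rollout undo daemonsets/bind --to-revision=1 --dry-run   # prints the "(dry run)" pod template above
    kubectl rollout undo daemonsets/bind --to-revision=1             # back to pause:2.0, one container
    kubectl rollout undo daemonsets/bind --to-revision=1000000       # error: revision not in history
    kubectl rollout undo daemonsets/bind                             # back to revision 2: pause:latest + nginx:test-cmd
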
E0114 22:52:55.496113   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_rc_tests
Running command: run_rc_tests

+++ Running case: test-cmd.run_rc_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rc_tests
+++ [0114 22:52:55] Creating namespace namespace-1579042375-7688
E0114 22:52:55.615613   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042375-7688 created
Context "test" modified.
+++ [0114 22:52:55] Testing kubectl(v1:replicationcontrollers)
E0114 22:52:55.746416   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1052: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:55.867222   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:52:56.038702   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"ddd5900c-bd88-4935-a1d5-348065126254", APIVersion:"v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mmw65
I0114 22:52:56.042077   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"ddd5900c-bd88-4935-a1d5-348065126254", APIVersion:"v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tfd6v
I0114 22:52:56.042124   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"ddd5900c-bd88-4935-a1d5-348065126254", APIVersion:"v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f85np
replicationcontroller "frontend" deleted
core.sh:1057: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1061: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:52:56.497385   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:56.616961   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:52:56.638098   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-llzlg
I0114 22:52:56.640809   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qxpwr
I0114 22:52:56.642835   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t7dmk
E0114 22:52:56.747454   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1065: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:52:56.868728   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Replicas:
matched Pods Status:
... skipping 4 lines ...
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-llzlg
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-qxpwr
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-t7dmk
matched Name:
E0114 22:52:57.498807   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Replicas:
matched Pods Status:
... skipping 4 lines ...
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-llzlg
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-qxpwr
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-t7dmk
E0114 22:52:57.618112   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-llzlg
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-qxpwr
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-t7dmk
E0114 22:52:57.748704   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 3 lines ...
      cpu:     100m
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E0114 22:52:57.869835   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579042375-7688
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1085: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0114 22:52:58.194467   54738 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579042375-7688 /api/v1/namespaces/namespace-1579042375-7688/replicationcontrollers/frontend e7fc6695-5959-4bf7-9414-b442486c360a 1699 2 2020-01-14 22:52:56 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e0b618 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 22:52:58.202606   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-llzlg
core.sh:1089: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1093: Successful get rc frontend {{.spec.replicas}}: 2
E0114 22:52:58.501055   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: Expected replicas to be 3, was 2
E0114 22:52:58.619192   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1097: Successful get rc frontend {{.spec.replicas}}: 2
E0114 22:52:58.749955   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1101: Successful get rc frontend {{.spec.replicas}}: 2
E0114 22:52:58.871267   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend scaled
I0114 22:52:58.891726   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-84xcs
core.sh:1105: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1109: Successful get rc frontend {{.spec.replicas}}: 3
E0114 22:52:59.240672   54738 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579042375-7688 /api/v1/namespaces/namespace-1579042375-7688/replicationcontrollers/frontend e7fc6695-5959-4bf7-9414-b442486c360a 1712 4 2020-01-14 22:52:56 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a3c748 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
replicationcontroller/frontend scaled
I0114 22:52:59.246358   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"e7fc6695-5959-4bf7-9414-b442486c360a", APIVersion:"v1", ResourceVersion:"1712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-84xcs
core.sh:1113: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller "frontend" deleted
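
The frontend scaling dance above (3 -> 2, a failed precondition, 2 -> 3, then 3 -> 2) shows kubectl scale with and without --current-replicas; the "Expected replicas to be 3, was 2" error is the precondition check tripping. A sketch of the calls implied by the core.sh:1085 through core.sh:1113 assertions (exact flag combinations assumed):

    kubectl scale rc frontend --replicas=2                        # 3 -> 2
    kubectl scale rc frontend --current-replicas=3 --replicas=3   # fails: actual count is 2
    kubectl scale rc frontend --current-replicas=2 --replicas=3   # 2 -> 3
    kubectl scale rc frontend --replicas=2                        # 3 -> 2
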
E0114 22:52:59.502300   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:52:59.620610   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master created
I0114 22:52:59.647306   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-master", UID:"53529de9-ea93-45f6-a037-7941e72768ef", APIVersion:"v1", ResourceVersion:"1723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-mcg7z
E0114 22:52:59.751397   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-slave created
I0114 22:52:59.854832   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"e777694e-8c1b-42c9-a761-598e7b438c34", APIVersion:"v1", ResourceVersion:"1728", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-z59cv
I0114 22:52:59.858521   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"e777694e-8c1b-42c9-a761-598e7b438c34", APIVersion:"v1", ResourceVersion:"1728", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-dr7xp
E0114 22:52:59.872365   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master scaled
I0114 22:52:59.977451   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-master", UID:"53529de9-ea93-45f6-a037-7941e72768ef", APIVersion:"v1", ResourceVersion:"1735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-pxrfl
replicationcontroller/redis-slave scaled
I0114 22:52:59.981620   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"e777694e-8c1b-42c9-a761-598e7b438c34", APIVersion:"v1", ResourceVersion:"1737", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-rjndg
I0114 22:52:59.982007   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-master", UID:"53529de9-ea93-45f6-a037-7941e72768ef", APIVersion:"v1", ResourceVersion:"1735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-kl9r5
I0114 22:52:59.982042   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-master", UID:"53529de9-ea93-45f6-a037-7941e72768ef", APIVersion:"v1", ResourceVersion:"1735", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-p7788
I0114 22:52:59.984560   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"e777694e-8c1b-42c9-a761-598e7b438c34", APIVersion:"v1", ResourceVersion:"1737", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-24tmk
core.sh:1123: Successful get rc redis-master {{.spec.replicas}}: 4
core.sh:1124: Successful get rc redis-slave {{.spec.replicas}}: 4
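The two assertions above verify that one scale command can target several replication controllers at once; the core.sh invocation is most likely of this form (names taken from the log, exact flags assumed):

  kubectl scale rc redis-master redis-slave --replicas=4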
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
E0114 22:53:00.341331   54738 replica_set.go:534] sync "namespace-1579042375-7688/redis-slave" failed with Operation cannot be fulfilled on replicationcontrollers "redis-slave": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1579042375-7688/redis-slave, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e777694e-8c1b-42c9-a761-598e7b438c34, UID in object meta: 
E0114 22:53:00.503598   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:53:00.514649   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment", UID:"2c98d0e3-de54-49f7-8f7b-70a0cf710e9e", APIVersion:"apps/v1", ResourceVersion:"1769", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:53:00.518009   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"96eaf4ba-dced-4042-9ac8-35c3dbcf2f99", APIVersion:"apps/v1", ResourceVersion:"1770", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-sk9wm
I0114 22:53:00.522874   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"96eaf4ba-dced-4042-9ac8-35c3dbcf2f99", APIVersion:"apps/v1", ResourceVersion:"1770", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-kw6pb
I0114 22:53:00.524877   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"96eaf4ba-dced-4042-9ac8-35c3dbcf2f99", APIVersion:"apps/v1", ResourceVersion:"1770", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-z9fjt
E0114 22:53:00.621813   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment scaled
I0114 22:53:00.633208   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment", UID:"2c98d0e3-de54-49f7-8f7b-70a0cf710e9e", APIVersion:"apps/v1", ResourceVersion:"1783", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
I0114 22:53:00.642063   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"96eaf4ba-dced-4042-9ac8-35c3dbcf2f99", APIVersion:"apps/v1", ResourceVersion:"1784", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-z9fjt
I0114 22:53:00.644523   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"96eaf4ba-dced-4042-9ac8-35c3dbcf2f99", APIVersion:"apps/v1", ResourceVersion:"1784", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-kw6pb
core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
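Here the deployment is scaled from 3 replicas down to 1, and the ReplicaSet controller deletes the two surplus pods; a minimal reproduction, assuming the flags rather than quoting core.sh:

  kubectl scale deployment nginx-deployment --replicas=1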
E0114 22:53:00.752839   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
E0114 22:53:00.873431   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
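kubectl expose builds the Service selector from the target object's .spec.selector, so a fixture deployment that deliberately has none is rejected with the error above. The earlier success was presumably the plain form (flags assumed):

  kubectl expose deployment expose-test-deployment --port=80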
deployment.apps/nginx-deployment created
I0114 22:53:01.378096   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment", UID:"efce517f-4e18-4ecf-b4a5-4fb6f08d8332", APIVersion:"apps/v1", ResourceVersion:"1809", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:53:01.381460   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"868399f9-abb3-4528-82aa-99c14ca532f4", APIVersion:"apps/v1", ResourceVersion:"1810", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-mbtqj
I0114 22:53:01.384243   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"868399f9-abb3-4528-82aa-99c14ca532f4", APIVersion:"apps/v1", ResourceVersion:"1810", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-z79xr
I0114 22:53:01.385327   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-6986c7bc94", UID:"868399f9-abb3-4528-82aa-99c14ca532f4", APIVersion:"apps/v1", ResourceVersion:"1810", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-nrl9k
core.sh:1152: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
E0114 22:53:01.504970   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/nginx-deployment exposed
E0114 22:53:01.622999   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1156: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
E0114 22:53:01.754200   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
E0114 22:53:01.874822   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0114 22:53:02.044897   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"b1814824-7e98-4206-9c59-7384c318839d", APIVersion:"v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f6r5h
I0114 22:53:02.048424   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"b1814824-7e98-4206-9c59-7384c318839d", APIVersion:"v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5htxq
I0114 22:53:02.049755   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"b1814824-7e98-4206-9c59-7384c318839d", APIVersion:"v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l7g2f
core.sh:1163: Successful get rc frontend {{.spec.replicas}}: 3
service/frontend exposed
core.sh:1167: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E0114 22:53:02.506268   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-2 exposed
E0114 22:53:02.624263   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1171: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
E0114 22:53:02.756183   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
E0114 22:53:02.875804   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-3 exposed
core.sh:1176: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
service/frontend-4 exposed
core.sh:1180: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
service/frontend-5 exposed
E0114 22:53:03.507494   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1184: Successful get service frontend-5 {{(index .spec.ports 0).port}}: 80
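The frontend through frontend-5 services exercise kubectl expose's port and name handling; representative invocations (ports and names taken from the assertions, other flags assumed) look like:

  kubectl expose rc frontend --port=80
  kubectl expose rc frontend --port=443 --name=frontend-2
  kubectl expose pod valid-pod --port=444 --name=frontend-3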
E0114 22:53:03.625402   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod "valid-pod" deleted
E0114 22:53:03.757426   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
E0114 22:53:03.877169   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
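Service names must be valid DNS labels, hence the 63-character cap checked here; the rejected command was roughly (flags assumed):

  kubectl expose rc frontend --port=80 --name=invalid-large-service-name-that-has-more-than-sixty-three-characters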
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
has:kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
service "kubernetes-serve-hostname-testing-sixty-three-characters-in-len" deleted
Successful
message:service/etcd-server exposed
has:etcd-server exposed
core.sh:1214: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
E0114 22:53:04.508678   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1215: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
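When the exposed object declares more than one port, the resulting Service carries all of them, here named port-1 and port-2; the assertions read them back with Go templates, equivalent to something like:

  kubectl get service etcd-server -o jsonpath='{.spec.ports[0].name} {.spec.ports[0].port}'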
E0114 22:53:04.626606   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "etcd-server" deleted
E0114 22:53:04.758564   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1221: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
replicationcontroller "frontend" deleted
E0114 22:53:04.878150   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1225: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1229: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
(Breplicationcontroller/frontend created
I0114 22:53:05.320991   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"6fa4dd42-18f8-4cd4-bb17-5d0064e2c849", APIVersion:"v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s2jwt
I0114 22:53:05.323248   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"6fa4dd42-18f8-4cd4-bb17-5d0064e2c849", APIVersion:"v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l44dv
I0114 22:53:05.327482   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"6fa4dd42-18f8-4cd4-bb17-5d0064e2c849", APIVersion:"v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gqcxb
E0114 22:53:05.509992   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-slave created
I0114 22:53:05.554289   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"64970513-3b90-477d-b6e5-c6bd9d488bbd", APIVersion:"v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-cm5n6
I0114 22:53:05.564837   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"redis-slave", UID:"64970513-3b90-477d-b6e5-c6bd9d488bbd", APIVersion:"v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-htndj
E0114 22:53:05.627708   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1234: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
E0114 22:53:05.759713   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1238: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicationcontroller "frontend" deleted
E0114 22:53:05.879701   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller "redis-slave" deleted
core.sh:1242: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1246: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0114 22:53:06.350782   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"50426863-4a8b-44bb-878e-bb5782d8f3ee", APIVersion:"v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-59w6c
I0114 22:53:06.356242   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"50426863-4a8b-44bb-878e-bb5782d8f3ee", APIVersion:"v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zzcwp
I0114 22:53:06.356976   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1579042375-7688", Name:"frontend", UID:"50426863-4a8b-44bb-878e-bb5782d8f3ee", APIVersion:"v1", ResourceVersion:"1931", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pdhsk
core.sh:1249: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:53:06.511240   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/frontend autoscaled
E0114 22:53:06.628911   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1252: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
E0114 22:53:06.761142   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "frontend" deleted
E0114 22:53:06.881232   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1256: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
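These autoscale checks create an HPA against the frontend replication controller twice (min defaults to 1 when unset) and then confirm that --max is mandatory; the three invocations were likely:

  kubectl autoscale rc frontend --max=2 --cpu-percent=70
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
  kubectl autoscale rc frontend        # fails: required flag(s) "max" not set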
replicationcontroller "frontend" deleted
core.sh:1265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:07.512607   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: nginx-deployment-resources
... skipping 22 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
E0114 22:53:07.630024   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
E0114 22:53:07.762521   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:07.882485   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources created
I0114 22:53:07.923485   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0114 22:53:07.929689   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-67f8cfff5", UID:"a8d6cd2c-254f-4d2a-a7f8-0558a1c02399", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-9gxlc
I0114 22:53:07.932347   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-67f8cfff5", UID:"a8d6cd2c-254f-4d2a-a7f8-0558a1c02399", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-tmjt2
I0114 22:53:07.934776   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-67f8cfff5", UID:"a8d6cd2c-254f-4d2a-a7f8-0558a1c02399", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-9zw7f
core.sh:1271: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1272: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:53:08.438513   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"1967", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0114 22:53:08.444768   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-55c547f795", UID:"73a5e3be-e2f7-48c4-96f4-b17ca3fad31b", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-7k64x
E0114 22:53:08.513737   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1276: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
E0114 22:53:08.631244   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1277: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
E0114 22:53:08.763804   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: unable to find container named redis
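kubectl set resources addresses containers by name via -c; this deployment only has the nginx and perl containers (names assumed from the image assertions above), so -c=redis fails while the following updates succeed. Illustrative forms, with values taken from the later assertions:

  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m   # fails as above
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m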
E0114 22:53:08.883818   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:53:08.938169   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
I0114 22:53:08.946110   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0114 22:53:08.950307   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-6d86564b45", UID:"f127cccb-04db-4012-af39-ca2135776e50", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-bfc9g
I0114 22:53:08.953608   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-67f8cfff5", UID:"a8d6cd2c-254f-4d2a-a7f8-0558a1c02399", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-9gxlc
core.sh:1282: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
deployment.apps/nginx-deployment-resources resource requirements updated
I0114 22:53:09.290684   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"2000", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 1
I0114 22:53:09.298123   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources", UID:"bc6b4f44-9db5-4407-a4ca-ac187a44a84a", APIVersion:"apps/v1", ResourceVersion:"2003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
I0114 22:53:09.298657   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-67f8cfff5", UID:"a8d6cd2c-254f-4d2a-a7f8-0558a1c02399", APIVersion:"apps/v1", ResourceVersion:"2004", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-tmjt2
I0114 22:53:09.303395   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042375-7688", Name:"nginx-deployment-resources-6c478d4fdb", UID:"c2f32611-553a-4945-ada8-e91ead29482c", APIVersion:"apps/v1", ResourceVersion:"2007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6c478d4fdb-jfkjt
core.sh:1286: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
E0114 22:53:09.515551   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1288: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
(BE0114 22:53:09.633194   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2020-01-14T22:53:07Z"
... skipping 65 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
E0114 22:53:09.765445   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
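With --local, kubectl set resources mutates only the manifest passed in and never contacts the server, which is why a --filename is required; a minimal sketch (rsrc.yaml is the placeholder name from the error text above):

  kubectl set resources -f rsrc.yaml --local --limits=cpu=200m -o yaml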
E0114 22:53:09.885145   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1292: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1293: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1294: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
deployment.apps "nginx-deployment-resources" deleted
+++ exit code: 0
Recording: run_deployment_tests
Running command: run_deployment_tests

+++ Running case: test-cmd.run_deployment_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_deployment_tests
+++ [0114 22:53:10] Creating namespace namespace-1579042390-12063
namespace/namespace-1579042390-12063 created
E0114 22:53:10.516867   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:53:10] Testing deployments
deployment.apps/test-nginx-extensions created
I0114 22:53:10.631421   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"test-nginx-extensions", UID:"e2eb2f3d-141a-433f-83da-7ce9b0ebf443", APIVersion:"apps/v1", ResourceVersion:"2036", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-extensions-5559c76db7 to 1
E0114 22:53:10.634215   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:10.639807   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"test-nginx-extensions-5559c76db7", UID:"c742a137-d6d8-4124-b530-23ec23f2b692", APIVersion:"apps/v1", ResourceVersion:"2037", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-nginx-extensions-5559c76db7-tbgw6
apps.sh:185: Successful get deploy test-nginx-extensions {{(index .spec.template.spec.containers 0).name}}: nginx
E0114 22:53:10.766723   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:10
has not:2
E0114 22:53:10.886422   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:apps/v1
has:apps/v1
deployment.apps "test-nginx-extensions" deleted
deployment.apps/test-nginx-apps created
I0114 22:53:11.139746   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"test-nginx-apps", UID:"d6757f2a-07dd-40e4-bbdd-f23c96920679", APIVersion:"apps/v1", ResourceVersion:"2050", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-nginx-apps-79b9bd9585 to 1
... skipping 2 lines ...
Successful
message:10
has:10
Successful
message:apps/v1
has:apps/v1
E0114 22:53:11.518282   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Controlled By
matched Replicas:
... skipping 7 lines ...
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 3 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: test-nginx-apps-79b9bd9585-c6cdx
E0114 22:53:11.635434   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Image:
matched Node:
matched Labels:
matched Status:
matched Controlled By
... skipping 18 lines ...
    Mounts:       <none>
Volumes:          <none>
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
E0114 22:53:11.767937   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-nginx-apps" deleted
E0114 22:53:11.887594   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:214: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created
I0114 22:53:12.024191   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-with-command", UID:"ab1d4e39-ef19-4a95-82ca-c12519744316", APIVersion:"apps/v1", ResourceVersion:"2067", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-with-command-757c6f58dd to 1
I0114 22:53:12.029956   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-with-command-757c6f58dd", UID:"8c0d4ec3-1983-4048-873e-4d42d3205856", APIVersion:"apps/v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-with-command-757c6f58dd-7bj4r
apps.sh:218: Successful get deploy nginx-with-command {{(index .spec.template.spec.containers 0).name}}: nginx
deployment.apps "nginx-with-command" deleted
apps.sh:224: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:12.519604   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/deployment-with-unixuserid created
I0114 22:53:12.539751   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"deployment-with-unixuserid", UID:"35535271-5724-4474-8e4d-83906ec313ad", APIVersion:"apps/v1", ResourceVersion:"2081", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-with-unixuserid-8fcdfc94f to 1
I0114 22:53:12.543565   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"deployment-with-unixuserid-8fcdfc94f", UID:"c693115f-bda6-41eb-a53b-3ff509975563", APIVersion:"apps/v1", ResourceVersion:"2082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-with-unixuserid-8fcdfc94f-xtv9h
E0114 22:53:12.636620   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: deployment-with-unixuserid:
deployment.apps "deployment-with-unixuserid" deleted
E0114 22:53:12.768877   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:235: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:12.888826   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:53:13.075005   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"ed328ff8-cb9f-4ebd-b874-6df7dc90b0b4", APIVersion:"apps/v1", ResourceVersion:"2095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:53:13.079895   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"f4defe9d-b8b0-434b-b62e-160b20bf993a", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-dr2r4
I0114 22:53:13.083088   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"f4defe9d-b8b0-434b-b62e-160b20bf993a", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-6wc8g
I0114 22:53:13.086569   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"f4defe9d-b8b0-434b-b62e-160b20bf993a", APIVersion:"apps/v1", ResourceVersion:"2096", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-5srcf
apps.sh:239: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 3
deployment.apps "nginx-deployment" deleted
apps.sh:242: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:246: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:13.520936   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:247: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:13.637761   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:53:13.697603   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"618f159f-55eb-428d-a14b-a7491d1c70f2", APIVersion:"apps/v1", ResourceVersion:"2121", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7f6fc565b9 to 1
I0114 22:53:13.701239   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-7f6fc565b9", UID:"028cfec9-5e85-43ec-801b-17c081c4f3a9", APIVersion:"apps/v1", ResourceVersion:"2122", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7f6fc565b9-28cxz
E0114 22:53:13.770183   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:251: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
E0114 22:53:13.890063   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
apps.sh:256: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:257: Successful get rs {{range.items}}{{.spec.replicas}}{{end}}: 1
replicaset.apps "nginx-deployment-7f6fc565b9" deleted
apps.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:14.521993   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:53:14.625136   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"caf7e356-b021-4fce-8c68-0b4bb81403bb", APIVersion:"apps/v1", ResourceVersion:"2139", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0114 22:53:14.628954   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"4ea602ed-feea-444c-9f73-0fd1f124db5a", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xqxlg
I0114 22:53:14.632697   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"4ea602ed-feea-444c-9f73-0fd1f124db5a", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-wd56p
I0114 22:53:14.633054   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6986c7bc94", UID:"4ea602ed-feea-444c-9f73-0fd1f124db5a", APIVersion:"apps/v1", ResourceVersion:"2140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vpzsp
E0114 22:53:14.638710   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:268: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E0114 22:53:14.771366   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/nginx-deployment autoscaled
E0114 22:53:14.891170   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:271: Successful get hpa nginx-deployment {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
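Same autoscale path as before, this time against a Deployment; presumably:

  kubectl autoscale deployment nginx-deployment --min=2 --max=3 --cpu-percent=80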
horizontalpodautoscaler.autoscaling "nginx-deployment" deleted
deployment.apps "nginx-deployment" deleted
apps.sh:279: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0114 22:53:15.470867   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx", UID:"27741e20-85a9-4ecd-8d7e-e2264f576de1", APIVersion:"apps/v1", ResourceVersion:"2165", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0114 22:53:15.474438   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-f87d999f7", UID:"6ff49e57-7118-4ab1-8a93-6115351e23dd", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-wvwhv
I0114 22:53:15.478391   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-f87d999f7", UID:"6ff49e57-7118-4ab1-8a93-6115351e23dd", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-ddgpk
I0114 22:53:15.478431   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-f87d999f7", UID:"6ff49e57-7118-4ab1-8a93-6115351e23dd", APIVersion:"apps/v1", ResourceVersion:"2166", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-hsltx
E0114 22:53:15.523228   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:283: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E0114 22:53:15.639953   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:53:15.772731   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx skipped rollback (current template already matches revision 1)
E0114 22:53:15.892581   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:287: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
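A rollback to a revision whose pod template is identical to the current one is a no-op, which is what "skipped rollback" reports above; likely triggered by:

  kubectl rollout undo deployment/nginx --to-revision=1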
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx configured
I0114 22:53:16.133458   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx", UID:"27741e20-85a9-4ecd-8d7e-e2264f576de1", APIVersion:"apps/v1", ResourceVersion:"2179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-78487f9fd7 to 1
I0114 22:53:16.136523   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-78487f9fd7", UID:"e5994ded-4dfc-4f1d-8baf-d93ff9b43cdd", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-zb796
apps.sh:290: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:  k8s.gcr.io/nginx:test-cmd
apps.sh:293: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0114 22:53:16.524562   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:16.641081   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx rolled back
E0114 22:53:16.775060   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:16.893756   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:17.525743   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:17.642458   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:297: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:53:17.776280   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: unable to find specified revision 1000000 in history
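Asking for a revision that was never recorded fails cleanly; the recorded revisions can be inspected first:

  kubectl rollout undo deployment/nginx --to-revision=1000000   # fails as above
  kubectl rollout history deployment/nginx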
E0114 22:53:17.895074   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
E0114 22:53:18.526970   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:18.643625   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:18.777439   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:18.896592   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:304: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
E0114 22:53:19.528171   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
E0114 22:53:19.644870   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx resumed
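This block verifies that a paused deployment rejects both undo and restart until it is resumed; the sequence was roughly:

  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx       # rejected while paused
  kubectl rollout restart deployment/nginx    # rejected while paused
  kubectl rollout resume deployment/nginx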
E0114 22:53:19.778416   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx rolled back
E0114 22:53:19.897638   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0114 22:53:20.298671   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx", UID:"27741e20-85a9-4ecd-8d7e-e2264f576de1", APIVersion:"apps/v1", ResourceVersion:"2211", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
I0114 22:53:20.309448   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx", UID:"27741e20-85a9-4ecd-8d7e-e2264f576de1", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9c5c747cb to 1
I0114 22:53:20.310762   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-f87d999f7", UID:"6ff49e57-7118-4ab1-8a93-6115351e23dd", APIVersion:"apps/v1", ResourceVersion:"2215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-wvwhv
I0114 22:53:20.314759   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-9c5c747cb", UID:"2d8ae52c-b332-4160-9ca9-b8087591c1cf", APIVersion:"apps/v1", ResourceVersion:"2218", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c5c747cb-7hlqd
E0114 22:53:20.529352   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:20.645940   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:20.779653   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:20.899052   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "3"
... skipping 48 lines ...
      terminationGracePeriodSeconds: 30
status:
  fullyLabeledReplicas: 1
  observedGeneration: 2
  replicas: 1
has:deployment.kubernetes.io/revision: "6"
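The has: check above greps the dumped ReplicaSet for its revision annotation, confirming the restart minted revision 6; an equivalent manual check (rs name taken from the events above):

  kubectl get rs nginx-9c5c747cb -o yaml | grep 'deployment.kubernetes.io/revision:'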
E0114 22:53:21.530610   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:21.575116   54738 horizontal.go:353] Horizontal Pod Autoscaler frontend has been deleted in namespace-1579042375-7688
E0114 22:53:21.647208   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx2 created
I0114 22:53:21.700134   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx2", UID:"2586af75-b880-4391-9da7-72a79ce04a5e", APIVersion:"apps/v1", ResourceVersion:"2234", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-57b7865cd9 to 3
I0114 22:53:21.704131   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx2-57b7865cd9", UID:"572fffb9-ff20-4521-bad8-d36d4ca61113", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-9nsm2
I0114 22:53:21.707252   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx2-57b7865cd9", UID:"572fffb9-ff20-4521-bad8-d36d4ca61113", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-f9dfm
I0114 22:53:21.707810   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx2-57b7865cd9", UID:"572fffb9-ff20-4521-bad8-d36d4ca61113", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-57b7865cd9-8rhwj
E0114 22:53:21.780861   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx2" deleted
E0114 22:53:21.900540   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx" deleted
apps.sh:334: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0114 22:53:22.218260   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"6f529bc4-a82e-4be8-88ef-4bb8517d18d2", APIVersion:"apps/v1", ResourceVersion:"2268", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0114 22:53:22.220956   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"b5eb5aa0-2982-461c-96a5-00b0835d19c6", APIVersion:"apps/v1", ResourceVersion:"2269", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-hr466
I0114 22:53:22.224374   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"b5eb5aa0-2982-461c-96a5-00b0835d19c6", APIVersion:"apps/v1", ResourceVersion:"2269", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-mpmcn
I0114 22:53:22.229193   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"b5eb5aa0-2982-461c-96a5-00b0835d19c6", APIVersion:"apps/v1", ResourceVersion:"2269", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-5bh7h
apps.sh:337: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
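Note: each apps.sh:NNN line is a test-cmd assertion that renders a go-template against the live objects and compares the result to the expected suffix; run by hand, the check above would look roughly like (template copied verbatim from the assertion):
kubectl get deployment -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'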
apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:53:22.532700   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:339: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:53:22.650302   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment image updated
I0114 22:53:22.673296   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"6f529bc4-a82e-4be8-88ef-4bb8517d18d2", APIVersion:"apps/v1", ResourceVersion:"2284", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0114 22:53:22.677884   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-59df9b5f5b", UID:"5613245e-e31b-409d-aae1-0423f91c6959", APIVersion:"apps/v1", ResourceVersion:"2285", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-rsblz
E0114 22:53:22.782122   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0114 22:53:22.901556   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
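Note: that error is kubectl set image rejecting a container name absent from the pod template; a minimal sketch (container names and image tags beyond the log are assumptions):
kubectl set image deployment/nginx-deployment redis=redis:latest            # fails: no container named "redis"
kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9  # succeeds against an existing container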
deployment.apps/nginx-deployment image updated
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:349: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
E0114 22:53:23.533996   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0114 22:53:23.651496   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:353: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0114 22:53:23.783396   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
E0114 22:53:23.902825   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0114 22:53:24.092567   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"6f529bc4-a82e-4be8-88ef-4bb8517d18d2", APIVersion:"apps/v1", ResourceVersion:"2304", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0114 22:53:24.097592   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"b5eb5aa0-2982-461c-96a5-00b0835d19c6", APIVersion:"apps/v1", ResourceVersion:"2308", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-hr466
I0114 22:53:24.099655   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"6f529bc4-a82e-4be8-88ef-4bb8517d18d2", APIVersion:"apps/v1", ResourceVersion:"2307", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d758dbc54 to 1
I0114 22:53:24.104179   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-7d758dbc54", UID:"1d48aa13-2d6c-4a95-91ea-f8f977aca46d", APIVersion:"apps/v1", ResourceVersion:"2312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d758dbc54-r9dfl
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:361: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:53:24.535284   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E0114 22:53:24.652898   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx-deployment" deleted
E0114 22:53:24.784452   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:371: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:24.904259   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0114 22:53:25.043684   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2336", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0114 22:53:25.050469   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-kjkw9
I0114 22:53:25.054553   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-cpqfb
I0114 22:53:25.054612   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-knzdw
configmap/test-set-env-config created
secret/test-set-env-secret created
E0114 22:53:25.536762   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:376: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E0114 22:53:25.654655   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:378: Successful get configmaps/test-set-env-config {{.metadata.name}}: test-set-env-config
E0114 22:53:25.785689   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:379: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
deployment.apps/nginx-deployment env updated
I0114 22:53:25.905116   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2354", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
E0114 22:53:25.905243   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:25.909300   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6b9f7756b4", UID:"3228ddb1-c73e-42ba-b7c7-d3810df05bf4", APIVersion:"apps/v1", ResourceVersion:"2355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-cdgch
apps.sh:383: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
apps.sh:385: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
deployment.apps/nginx-deployment env updated
I0114 22:53:26.264822   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2364", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0114 22:53:26.271567   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-kjkw9
... skipping 2 lines ...
apps.sh:389: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 2
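Note: the env updated lines are kubectl set env at work, drawing on the ConfigMap and Secret created above; a sketch of plausible invocations (object names are from the log, literal values are assumptions):
kubectl set env deployment/nginx-deployment KEY_2=VALUE_2                          # add a literal variable
kubectl set env deployment/nginx-deployment --from=configmap/test-set-env-config   # import keys from the ConfigMap
kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret      # import keys from the Secret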
deployment.apps/nginx-deployment env updated
I0114 22:53:26.513060   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2385", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 1
I0114 22:53:26.519400   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2388", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-c6d5c5c7b to 1
I0114 22:53:26.521749   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2389", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-cpqfb
I0114 22:53:26.526560   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-c6d5c5c7b", UID:"8f87ad15-42ec-414c-b2c2-0e0b866059cc", APIVersion:"apps/v1", ResourceVersion:"2392", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-c6d5c5c7b-x4j2x
E0114 22:53:26.537848   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0114 22:53:26.649323   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2406", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 0
E0114 22:53:26.656893   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:26.660047   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-598d4d68b4", UID:"df6262b0-8434-4657-8073-83348fcf8faa", APIVersion:"apps/v1", ResourceVersion:"2410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-598d4d68b4-knzdw
I0114 22:53:26.662380   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2409", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5958f7687 to 1
I0114 22:53:26.666324   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-5958f7687", UID:"cff65b52-8976-4ed9-921a-ff99afd99e13", APIVersion:"apps/v1", ResourceVersion:"2414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5958f7687-f47mc
deployment.apps/nginx-deployment env updated
E0114 22:53:26.786679   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:26.906189   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated
I0114 22:53:26.958491   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2426", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6b9f7756b4 to 0
I0114 22:53:26.972752   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-6b9f7756b4", UID:"3228ddb1-c73e-42ba-b7c7-d3810df05bf4", APIVersion:"apps/v1", ResourceVersion:"2431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-cdgch
deployment.apps/nginx-deployment env updated
I0114 22:53:27.110200   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment", UID:"5f01bede-b424-495e-ad51-a798b23b044a", APIVersion:"apps/v1", ResourceVersion:"2430", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-d74969475 to 1
I0114 22:53:27.114135   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042390-12063", Name:"nginx-deployment-d74969475", UID:"80332c22-1339-4f04-acf0-cb02dc4ff6eb", APIVersion:"apps/v1", ResourceVersion:"2439", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-d74969475-bg5n2
deployment.apps "nginx-deployment" deleted
configmap "test-set-env-config" deleted
E0114 22:53:27.256962   54738 replica_set.go:534] sync "namespace-1579042390-12063/nginx-deployment-d74969475" failed with replicasets.apps "nginx-deployment-d74969475" not found
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0114 22:53:27] Creating namespace namespace-1579042407-5053
E0114 22:53:27.539253   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042407-5053 created
E0114 22:53:27.658195   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:53:27] Testing kubectl(v1:replicasets)
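Note: each case preamble creates a fresh namespace and repoints the "test" context at it, presumably via something like:
kubectl create namespace namespace-1579042407-5053
kubectl config set-context test --namespace=namespace-1579042407-5053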
E0114 22:53:27.788074   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:511: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:27.907439   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:53:28.021298   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"c0274271-6867-4f16-ac2d-f3f505a8f57d", APIVersion:"apps/v1", ResourceVersion:"2465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nk958
I0114 22:53:28.024273   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"c0274271-6867-4f16-ac2d-f3f505a8f57d", APIVersion:"apps/v1", ResourceVersion:"2465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c82x5
I0114 22:53:28.025772   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"c0274271-6867-4f16-ac2d-f3f505a8f57d", APIVersion:"apps/v1", ResourceVersion:"2465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vh4zx
+++ [0114 22:53:28] Deleting rs
replicaset.apps "frontend" deleted
apps.sh:517: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:521: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:28.542158   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:53:28.617971   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"7dce5797-8c1b-4a9f-83e3-d50bbcf5ebc0", APIVersion:"apps/v1", ResourceVersion:"2481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ppqfv
I0114 22:53:28.621285   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"7dce5797-8c1b-4a9f-83e3-d50bbcf5ebc0", APIVersion:"apps/v1", ResourceVersion:"2481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-26wk6
I0114 22:53:28.622437   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"7dce5797-8c1b-4a9f-83e3-d50bbcf5ebc0", APIVersion:"apps/v1", ResourceVersion:"2481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nn4x2
E0114 22:53:28.659582   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:525: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0114 22:53:28] Deleting rs
E0114 22:53:28.789264   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
E0114 22:53:28.908809   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:529: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:531: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-26wk6" deleted
pod "frontend-nn4x2" deleted
pod "frontend-ppqfv" deleted
apps.sh:534: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapps.sh:538: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:29.543482   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:53:29.632907   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"d451c511-bb23-4ea8-a943-089147de5087", APIVersion:"apps/v1", ResourceVersion:"2506", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bwfhx
I0114 22:53:29.635719   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"d451c511-bb23-4ea8-a943-089147de5087", APIVersion:"apps/v1", ResourceVersion:"2506", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g9z29
I0114 22:53:29.636807   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"d451c511-bb23-4ea8-a943-089147de5087", APIVersion:"apps/v1", ResourceVersion:"2506", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-djwhp
E0114 22:53:29.660653   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:542: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:53:29.790479   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:29.848072   54738 horizontal.go:353] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1579042390-12063
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Replicas:
... skipping 4 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-bwfhx
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-g9z29
  Normal  SuccessfulCreate  0s    replicaset-controller  Created pod: frontend-djwhp
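Note: the matched lines and the blocks above and below are kubectl describe output being grepped for expected fields; the manual equivalent is simply:
kubectl describe rs frontend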
E0114 22:53:29.910117   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:546: Successful describe
Name:         frontend
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-bwfhx
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-g9z29
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-djwhp
E0114 22:53:30.545287   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:30.661773   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1579042407-5053
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-bwfhx
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-g9z29
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-djwhp
E0114 22:53:30.791770   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Image:
E0114 22:53:30.911401   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Node:
matched Labels:
matched Status:
matched Controlled By
Successful describe pods:
Name:           frontend-bwfhx
... skipping 86 lines ...
E0114 22:53:31.121072   54738 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1579042407-5053 /apis/apps/v1/namespaces/namespace-1579042407-5053/replicasets/frontend d451c511-bb23-4ea8-a943-089147de5087 2515 2 2020-01-14 22:53:29 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v3 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00325a408 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0114 22:53:31.130480   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"d451c511-bb23-4ea8-a943-089147de5087", APIVersion:"apps/v1", ResourceVersion:"2515", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-bwfhx
apps.sh:568: Successful get rs frontend {{.spec.replicas}}: 2
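Note: the controller dump above reflects the ReplicaSet spec dropping from 3 to 2 replicas, i.e. the effect of a scale call such as:
kubectl scale rs frontend --replicas=2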
deployment.apps/scale-1 created
I0114 22:53:31.427715   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-1", UID:"4c6e2d0b-8209-4357-bc21-b4989495a8e3", APIVersion:"apps/v1", ResourceVersion:"2523", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
I0114 22:53:31.430187   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-1-5c5565bcd9", UID:"84b38bc7-9699-41ee-9b25-27af6ba05cd6", APIVersion:"apps/v1", ResourceVersion:"2524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-5ttc5
E0114 22:53:31.546671   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-2 created
I0114 22:53:31.642137   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-2", UID:"d7c6a412-da2a-40ec-9d4a-a101febe44dd", APIVersion:"apps/v1", ResourceVersion:"2534", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
I0114 22:53:31.645823   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-2-5c5565bcd9", UID:"15f4976c-9ae8-434d-a59f-24db6116afa5", APIVersion:"apps/v1", ResourceVersion:"2535", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-xrbhn
E0114 22:53:31.662620   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:31.793056   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-3 created
I0114 22:53:31.858849   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-3", UID:"dcf9f561-019e-437e-ba63-4f0717ada80e", APIVersion:"apps/v1", ResourceVersion:"2544", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
I0114 22:53:31.863600   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-3-5c5565bcd9", UID:"b98d2b5d-3187-4bf6-ab78-f1eb4a5336ab", APIVersion:"apps/v1", ResourceVersion:"2545", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-lqwvg
E0114 22:53:31.912586   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:574: Successful get deploy scale-1 {{.spec.replicas}}: 1
apps.sh:575: Successful get deploy scale-2 {{.spec.replicas}}: 1
apps.sh:576: Successful get deploy scale-3 {{.spec.replicas}}: 1
deployment.apps/scale-1 scaled
I0114 22:53:32.282876   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-1", UID:"4c6e2d0b-8209-4357-bc21-b4989495a8e3", APIVersion:"apps/v1", ResourceVersion:"2554", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 2
deployment.apps/scale-2 scaled
I0114 22:53:32.288565   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-1-5c5565bcd9", UID:"84b38bc7-9699-41ee-9b25-27af6ba05cd6", APIVersion:"apps/v1", ResourceVersion:"2555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-scrgm
I0114 22:53:32.290661   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-2", UID:"d7c6a412-da2a-40ec-9d4a-a101febe44dd", APIVersion:"apps/v1", ResourceVersion:"2556", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 2
I0114 22:53:32.293849   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-2-5c5565bcd9", UID:"15f4976c-9ae8-434d-a59f-24db6116afa5", APIVersion:"apps/v1", ResourceVersion:"2560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-ghfxv
apps.sh:579: Successful get deploy scale-1 {{.spec.replicas}}: 2
apps.sh:580: Successful get deploy scale-2 {{.spec.replicas}}: 2
E0114 22:53:32.548336   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:581: Successful get deploy scale-3 {{.spec.replicas}}: 1
E0114 22:53:32.664020   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-1 scaled
I0114 22:53:32.754803   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-1", UID:"4c6e2d0b-8209-4357-bc21-b4989495a8e3", APIVersion:"apps/v1", ResourceVersion:"2574", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 3
deployment.apps/scale-2 scaled
I0114 22:53:32.759711   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-1-5c5565bcd9", UID:"84b38bc7-9699-41ee-9b25-27af6ba05cd6", APIVersion:"apps/v1", ResourceVersion:"2575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-942dc
deployment.apps/scale-3 scaled
I0114 22:53:32.765278   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-2", UID:"d7c6a412-da2a-40ec-9d4a-a101febe44dd", APIVersion:"apps/v1", ResourceVersion:"2576", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 3
I0114 22:53:32.771675   54738 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1579042407-5053", Name:"scale-3", UID:"dcf9f561-019e-437e-ba63-4f0717ada80e", APIVersion:"apps/v1", ResourceVersion:"2580", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 3
I0114 22:53:32.771744   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-2-5c5565bcd9", UID:"15f4976c-9ae8-434d-a59f-24db6116afa5", APIVersion:"apps/v1", ResourceVersion:"2582", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-s44gq
I0114 22:53:32.778226   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-3-5c5565bcd9", UID:"b98d2b5d-3187-4bf6-ab78-f1eb4a5336ab", APIVersion:"apps/v1", ResourceVersion:"2585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-khgmq
I0114 22:53:32.781864   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"scale-3-5c5565bcd9", UID:"b98d2b5d-3187-4bf6-ab78-f1eb4a5336ab", APIVersion:"apps/v1", ResourceVersion:"2585", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-pkxq8
E0114 22:53:32.793911   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:584: Successful get deploy scale-1 {{.spec.replicas}}: 3
E0114 22:53:32.913775   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:585: Successful get deploy scale-2 {{.spec.replicas}}: 3
apps.sh:586: Successful get deploy scale-3 {{.spec.replicas}}: 3
replicaset.apps "frontend" deleted
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
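Note: scale-1 and scale-2 were first stepped to 2 and then all three deployments to 3; kubectl scale accepts several resources in one call, so the final step is presumably along the lines of:
kubectl scale deployment scale-1 scale-2 scale-3 --replicas=3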
replicaset.apps/frontend created
I0114 22:53:33.544382   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"8e8656a4-360a-4816-aa79-87184f685b6d", APIVersion:"apps/v1", ResourceVersion:"2637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m6d7r
I0114 22:53:33.547692   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"8e8656a4-360a-4816-aa79-87184f685b6d", APIVersion:"apps/v1", ResourceVersion:"2637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-75m4s
I0114 22:53:33.548040   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"8e8656a4-360a-4816-aa79-87184f685b6d", APIVersion:"apps/v1", ResourceVersion:"2637", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7hph8
E0114 22:53:33.549427   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0114 22:53:33.665196   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:594: Successful get rs frontend {{.spec.replicas}}: 3
service/frontend exposed
E0114 22:53:33.795214   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:598: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E0114 22:53:33.914990   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-2 exposed
apps.sh:602: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
service "frontend" deleted
service "frontend-2" deleted
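Note: the two services come from exposing the ReplicaSet twice; a sketch consistent with the names and port visible in the log (any further flags, including whatever gives frontend-2 its "default" port name, are assumptions):
kubectl expose rs frontend --port=80
kubectl expose rs frontend --port=80 --name=frontend-2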
apps.sh:608: Successful get rs frontend {{.metadata.generation}}: 1
replicaset.apps/frontend image updated
E0114 22:53:34.550632   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:610: Successful get rs frontend {{.metadata.generation}}: 2
replicaset.apps/frontend env updated
E0114 22:53:34.666168   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:612: Successful get rs frontend {{.metadata.generation}}: 3
E0114 22:53:34.796610   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend resource requirements updated
E0114 22:53:34.916427   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:614: Successful get rs frontend {{.metadata.generation}}: 4
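Note: each mutation above bumps .metadata.generation by one; a sketch of the kind of commands involved (the php-redis container name is from the log; images and values are assumptions):
kubectl set image rs/frontend php-redis=gcr.io/google_samples/gb-frontend:v4   # generation 1 -> 2
kubectl set env rs/frontend TEST=1                                             # generation 2 -> 3
kubectl set resources rs/frontend --limits=cpu=200m,memory=512Mi               # generation 3 -> 4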
apps.sh:618: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
replicaset.apps "frontend" deleted
apps.sh:622: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:626: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:35.552055   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:53:35.589434   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"0264fec0-2a29-43ec-9c44-16ead2267af5", APIVersion:"apps/v1", ResourceVersion:"2673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-sc4r5
I0114 22:53:35.594345   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"0264fec0-2a29-43ec-9c44-16ead2267af5", APIVersion:"apps/v1", ResourceVersion:"2673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j5dj2
I0114 22:53:35.594396   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"0264fec0-2a29-43ec-9c44-16ead2267af5", APIVersion:"apps/v1", ResourceVersion:"2673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l9hwm
E0114 22:53:35.667465   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/redis-slave created
E0114 22:53:35.797714   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0114 22:53:35.799169   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"redis-slave", UID:"7054b152-e748-4d27-9b5d-22d81aba03cc", APIVersion:"apps/v1", ResourceVersion:"2682", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-c2mv9
I0114 22:53:35.802987   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"redis-slave", UID:"7054b152-e748-4d27-9b5d-22d81aba03cc", APIVersion:"apps/v1", ResourceVersion:"2682", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-f2hqr
E0114 22:53:35.917818   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:631: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
apps.sh:635: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicaset.apps "frontend" deleted
replicaset.apps "redis-slave" deleted
apps.sh:639: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:644: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:36.553448   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0114 22:53:36.622773   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"37aa17c6-6f7e-48a7-9879-1999de48c30d", APIVersion:"apps/v1", ResourceVersion:"2702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9kk7j
I0114 22:53:36.625859   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"37aa17c6-6f7e-48a7-9879-1999de48c30d", APIVersion:"apps/v1", ResourceVersion:"2702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9625m
I0114 22:53:36.626927   54738 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1579042407-5053", Name:"frontend", UID:"37aa17c6-6f7e-48a7-9879-1999de48c30d", APIVersion:"apps/v1", ResourceVersion:"2702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h9bvr
E0114 22:53:36.668855   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:647: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: frontend:
E0114 22:53:36.798999   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling/frontend autoscaled
E0114 22:53:36.919078   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:650: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:654: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
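Note: the two HPAs and the final error match kubectl autoscale, whose --max flag is mandatory; the flag combinations below are reconstructed from the asserted min/max/target values and are assumptions:
kubectl autoscale rs frontend --max=2 --cpu-percent=70           # min defaulted, giving 1 2 70
kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # giving 2 3 80
kubectl autoscale rs frontend                                    # fails: required flag(s) "max" not set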
E0114 22:53:37.554676   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
+++ exit code: 0
E0114 22:53:37.670065   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_stateful_set_tests
+++ [0114 22:53:37] Creating namespace namespace-1579042417-18588
E0114 22:53:37.800487   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042417-18588 created
E0114 22:53:37.920790   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:53:37] Testing kubectl(v1:statefulsets)
apps.sh:470: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
I0114 22:53:38.282693   51297 controller.go:606] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
apps.sh:476: Successful get statefulset nginx {{.spec.replicas}}: 0
apps.sh:477: Successful get statefulset nginx {{.status.observedGeneration}}: 1
E0114 22:53:38.556092   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx scaled
I0114 22:53:38.635484   54738 event.go:278] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"namespace-1579042417-18588", Name:"nginx", UID:"4f500bf6-6663-424a-9792-4a91a8b2a532", APIVersion:"apps/v1", ResourceVersion:"2729", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod nginx-0 in StatefulSet nginx successful
E0114 22:53:38.671466   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:481: Successful get statefulset nginx {{.spec.replicas}}: 1
E0114 22:53:38.801815   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:482: Successful get statefulset nginx {{.status.observedGeneration}}: 2
E0114 22:53:38.922035   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx restarted
apps.sh:490: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I0114 22:53:39.286627   54738 stateful_set.go:420] StatefulSet has been deleted namespace-1579042417-18588/nginx
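Note: the scaled/restarted lines correspond to kubectl scale and kubectl rollout restart, each bumping .status.observedGeneration; a minimal sketch:
kubectl scale statefulset nginx --replicas=1   # observedGeneration 1 -> 2
kubectl rollout restart statefulset nginx      # observedGeneration 2 -> 3
kubectl delete statefulset nginx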
+++ exit code: 0
Recording: run_statefulset_history_tests
Running command: run_statefulset_history_tests

+++ Running case: test-cmd.run_statefulset_history_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_statefulset_history_tests
+++ [0114 22:53:39] Creating namespace namespace-1579042419-1876
E0114 22:53:39.557407   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1579042419-1876 created
E0114 22:53:39.672732   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [0114 22:53:39] Testing kubectl(v1:statefulsets, v1:controllerrevisions)
E0114 22:53:39.803002   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:418: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
E0114 22:53:39.923289   54738 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx created
apps.sh:422: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --ser