PR aojea: Be more aggressive acquiring the iptables lock
Result FAILURE
Tests 1 failed / 2899 succeeded
Started 2019-12-02 23:30
Elapsed 27m29s
Revision 073564a55c0badf9f3d86a1ce43ca470b0bafd25
Refs 85771

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestPreemption 36s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestPreemption$
=== RUN   TestPreemption
W1202 23:53:24.185159  109541 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1202 23:53:24.185179  109541 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1202 23:53:24.185194  109541 master.go:311] Node port range unspecified. Defaulting to 30000-32767.
I1202 23:53:24.185208  109541 master.go:267] Using reconciler: 
I1202 23:53:24.187190  109541 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.189114  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.189171  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.190412  109541 store.go:1350] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1202 23:53:24.190505  109541 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1202 23:53:24.190486  109541 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.191801  109541 watch_cache.go:409] Replace watchCache (rev: 30792) 
I1202 23:53:24.192028  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.192062  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.193131  109541 store.go:1350] Monitoring events count at <storage-prefix>//events
I1202 23:53:24.193219  109541 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1202 23:53:24.193201  109541 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.194447  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.194477  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.195573  109541 store.go:1350] Monitoring limitranges count at <storage-prefix>//limitranges
I1202 23:53:24.195822  109541 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.196224  109541 watch_cache.go:409] Replace watchCache (rev: 30793) 
I1202 23:53:24.196525  109541 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1202 23:53:24.197113  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.197143  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.198122  109541 watch_cache.go:409] Replace watchCache (rev: 30793) 
I1202 23:53:24.198620  109541 store.go:1350] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1202 23:53:24.198798  109541 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1202 23:53:24.198840  109541 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.200839  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.200896  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.201638  109541 watch_cache.go:409] Replace watchCache (rev: 30794) 
I1202 23:53:24.201734  109541 store.go:1350] Monitoring secrets count at <storage-prefix>//secrets
I1202 23:53:24.201788  109541 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1202 23:53:24.201992  109541 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.202794  109541 watch_cache.go:409] Replace watchCache (rev: 30794) 
I1202 23:53:24.203438  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.203473  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.204123  109541 store.go:1350] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1202 23:53:24.204338  109541 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.205780  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.205809  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.206038  109541 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1202 23:53:24.206932  109541 watch_cache.go:409] Replace watchCache (rev: 30795) 
I1202 23:53:24.207046  109541 store.go:1350] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1202 23:53:24.207131  109541 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1202 23:53:24.207264  109541 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.208229  109541 watch_cache.go:409] Replace watchCache (rev: 30795) 
I1202 23:53:24.209364  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.209395  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.210501  109541 store.go:1350] Monitoring configmaps count at <storage-prefix>//configmaps
I1202 23:53:24.210579  109541 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1202 23:53:24.210825  109541 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.211521  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.211639  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.212132  109541 watch_cache.go:409] Replace watchCache (rev: 30796) 
I1202 23:53:24.212391  109541 store.go:1350] Monitoring namespaces count at <storage-prefix>//namespaces
I1202 23:53:24.212415  109541 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1202 23:53:24.212593  109541 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.212803  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.212830  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.213568  109541 store.go:1350] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1202 23:53:24.213678  109541 watch_cache.go:409] Replace watchCache (rev: 30797) 
I1202 23:53:24.213783  109541 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.213751  109541 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1202 23:53:24.214032  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.214076  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.215118  109541 watch_cache.go:409] Replace watchCache (rev: 30797) 
I1202 23:53:24.215792  109541 store.go:1350] Monitoring nodes count at <storage-prefix>//minions
I1202 23:53:24.215921  109541 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1202 23:53:24.216046  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.216234  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.216253  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.217507  109541 watch_cache.go:409] Replace watchCache (rev: 30797) 
I1202 23:53:24.217614  109541 store.go:1350] Monitoring pods count at <storage-prefix>//pods
I1202 23:53:24.217642  109541 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1202 23:53:24.217986  109541 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.218181  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.218201  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.219208  109541 watch_cache.go:409] Replace watchCache (rev: 30797) 
I1202 23:53:24.219531  109541 store.go:1350] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1202 23:53:24.219610  109541 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1202 23:53:24.220046  109541 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.220429  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.220568  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.220960  109541 watch_cache.go:409] Replace watchCache (rev: 30797) 
I1202 23:53:24.223048  109541 store.go:1350] Monitoring services count at <storage-prefix>//services/specs
I1202 23:53:24.223146  109541 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.223569  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.223630  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.223745  109541 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1202 23:53:24.224938  109541 watch_cache.go:409] Replace watchCache (rev: 30798) 
I1202 23:53:24.225524  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.225551  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.227025  109541 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.227356  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.227391  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.228219  109541 store.go:1350] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1202 23:53:24.228248  109541 rest.go:113] the default service ipfamily for this cluster is: IPv4
I1202 23:53:24.228334  109541 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1202 23:53:24.228875  109541 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.229072  109541 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.229713  109541 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.229859  109541 watch_cache.go:409] Replace watchCache (rev: 30798) 
I1202 23:53:24.230367  109541 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.231018  109541 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.231647  109541 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.232528  109541 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.232701  109541 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.232954  109541 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.233454  109541 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.234008  109541 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.234202  109541 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.234924  109541 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.235369  109541 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.236002  109541 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.236298  109541 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.237021  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.237517  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.237657  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.237783  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.238022  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.238311  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.240545  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.241505  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.243166  109541 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.244202  109541 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.244961  109541 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.247511  109541 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.248053  109541 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.249698  109541 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.251777  109541 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.252684  109541 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.256876  109541 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.257841  109541 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.258733  109541 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.259198  109541 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.259420  109541 master.go:496] Skipping disabled API group "auditregistration.k8s.io".
I1202 23:53:24.259444  109541 master.go:507] Enabling API group "authentication.k8s.io".
I1202 23:53:24.259462  109541 master.go:507] Enabling API group "authorization.k8s.io".
I1202 23:53:24.259671  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.259951  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.261402  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.262948  109541 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1202 23:53:24.263191  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.263259  109541 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1202 23:53:24.263448  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.263481  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.264485  109541 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1202 23:53:24.264581  109541 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1202 23:53:24.264715  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.264975  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.265013  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.265706  109541 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1202 23:53:24.265735  109541 master.go:507] Enabling API group "autoscaling".
I1202 23:53:24.265784  109541 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1202 23:53:24.266001  109541 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.266210  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.266234  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.267046  109541 store.go:1350] Monitoring jobs.batch count at <storage-prefix>//jobs
I1202 23:53:24.267269  109541 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1202 23:53:24.267259  109541 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.267643  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.267670  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.269352  109541 watch_cache.go:409] Replace watchCache (rev: 30802) 
I1202 23:53:24.269443  109541 watch_cache.go:409] Replace watchCache (rev: 30802) 
I1202 23:53:24.269452  109541 watch_cache.go:409] Replace watchCache (rev: 30802) 
I1202 23:53:24.269662  109541 watch_cache.go:409] Replace watchCache (rev: 30802) 
I1202 23:53:24.271615  109541 store.go:1350] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1202 23:53:24.271648  109541 master.go:507] Enabling API group "batch".
I1202 23:53:24.271706  109541 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1202 23:53:24.271897  109541 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.272141  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.272184  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.273592  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.274587  109541 store.go:1350] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1202 23:53:24.274615  109541 master.go:507] Enabling API group "certificates.k8s.io".
I1202 23:53:24.274817  109541 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.274962  109541 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1202 23:53:24.275303  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.275326  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.277316  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.277569  109541 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1202 23:53:24.277681  109541 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1202 23:53:24.277817  109541 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.279097  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.279137  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.280352  109541 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1202 23:53:24.280379  109541 master.go:507] Enabling API group "coordination.k8s.io".
I1202 23:53:24.280434  109541 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1202 23:53:24.280596  109541 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.281131  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.281162  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.281274  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.282050  109541 store.go:1350] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I1202 23:53:24.282076  109541 master.go:507] Enabling API group "discovery.k8s.io".
I1202 23:53:24.282279  109541 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.282467  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.282490  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.282516  109541 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I1202 23:53:24.282616  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.283543  109541 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1202 23:53:24.283579  109541 master.go:507] Enabling API group "extensions".
I1202 23:53:24.283590  109541 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1202 23:53:24.283794  109541 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.283824  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.284148  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.284188  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.284607  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.285113  109541 store.go:1350] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1202 23:53:24.285193  109541 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1202 23:53:24.285338  109541 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.285594  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.285623  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.286442  109541 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1202 23:53:24.286443  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.286473  109541 master.go:507] Enabling API group "networking.k8s.io".
I1202 23:53:24.286727  109541 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1202 23:53:24.287297  109541 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.287615  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.287641  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.288540  109541 watch_cache.go:409] Replace watchCache (rev: 30803) 
I1202 23:53:24.288745  109541 store.go:1350] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1202 23:53:24.288773  109541 master.go:507] Enabling API group "node.k8s.io".
I1202 23:53:24.288786  109541 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1202 23:53:24.289010  109541 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.289225  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.289254  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.289634  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.289887  109541 store.go:1350] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1202 23:53:24.289923  109541 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1202 23:53:24.290087  109541 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.290335  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.290369  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.291793  109541 store.go:1350] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1202 23:53:24.291822  109541 master.go:507] Enabling API group "policy".
I1202 23:53:24.291883  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.291903  109541 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.292124  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.292146  109541 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1202 23:53:24.292161  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.292972  109541 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1202 23:53:24.293094  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.293191  109541 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.293291  109541 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1202 23:53:24.293430  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.293451  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.294318  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.294648  109541 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1202 23:53:24.294712  109541 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1202 23:53:24.294721  109541 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.294952  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.294990  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.295467  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.295661  109541 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1202 23:53:24.295869  109541 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1202 23:53:24.295914  109541 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.296162  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.296204  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.296688  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.297012  109541 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1202 23:53:24.297080  109541 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1202 23:53:24.297099  109541 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.297319  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.297355  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.297747  109541 watch_cache.go:409] Replace watchCache (rev: 30804) 
I1202 23:53:24.297898  109541 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1202 23:53:24.298021  109541 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1202 23:53:24.298114  109541 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.298340  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.298373  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.299282  109541 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1202 23:53:24.299345  109541 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.299448  109541 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1202 23:53:24.299552  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.299581  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.299707  109541 watch_cache.go:409] Replace watchCache (rev: 30805) 
I1202 23:53:24.300397  109541 watch_cache.go:409] Replace watchCache (rev: 30805) 
I1202 23:53:24.301447  109541 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1202 23:53:24.301658  109541 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.301791  109541 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1202 23:53:24.301913  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.301938  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.302730  109541 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1202 23:53:24.302770  109541 master.go:507] Enabling API group "rbac.authorization.k8s.io".
I1202 23:53:24.303013  109541 watch_cache.go:409] Replace watchCache (rev: 30805) 
I1202 23:53:24.303333  109541 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1202 23:53:24.304234  109541 watch_cache.go:409] Replace watchCache (rev: 30805) 
I1202 23:53:24.304926  109541 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.305169  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.305194  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.306192  109541 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1202 23:53:24.306228  109541 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1202 23:53:24.307386  109541 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.307582  109541 watch_cache.go:409] Replace watchCache (rev: 30805) 
I1202 23:53:24.307642  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.307663  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.308672  109541 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1202 23:53:24.308700  109541 master.go:507] Enabling API group "scheduling.k8s.io".
I1202 23:53:24.308740  109541 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1202 23:53:24.308834  109541 master.go:496] Skipping disabled API group "settings.k8s.io".
I1202 23:53:24.309081  109541 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.309283  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.309314  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.311036  109541 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1202 23:53:24.311143  109541 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1202 23:53:24.311254  109541 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.312468  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.312677  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.313213  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.313252  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.314080  109541 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1202 23:53:24.314351  109541 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1202 23:53:24.314369  109541 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.315150  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.315719  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.315748  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.316690  109541 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1202 23:53:24.316893  109541 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1202 23:53:24.316962  109541 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.318261  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.318341  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.318371  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.319212  109541 store.go:1350] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1202 23:53:24.319296  109541 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1202 23:53:24.319447  109541 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.319671  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.319729  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.320529  109541 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1202 23:53:24.320759  109541 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.320808  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.320888  109541 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1202 23:53:24.321046  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.321082  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.321987  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.322490  109541 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1202 23:53:24.322600  109541 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1202 23:53:24.322710  109541 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.322964  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.322996  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.323730  109541 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1202 23:53:24.323762  109541 master.go:507] Enabling API group "storage.k8s.io".
I1202 23:53:24.323781  109541 master.go:496] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I1202 23:53:24.324005  109541 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1202 23:53:24.324029  109541 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.324248  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.324276  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.324302  109541 watch_cache.go:409] Replace watchCache (rev: 30806) 
I1202 23:53:24.325341  109541 watch_cache.go:409] Replace watchCache (rev: 30807) 
I1202 23:53:24.325438  109541 store.go:1350] Monitoring deployments.apps count at <storage-prefix>//deployments
I1202 23:53:24.325502  109541 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1202 23:53:24.325772  109541 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.326144  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.326274  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.326782  109541 watch_cache.go:409] Replace watchCache (rev: 30807) 
I1202 23:53:24.327090  109541 store.go:1350] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1202 23:53:24.327245  109541 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1202 23:53:24.327314  109541 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.327646  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.327679  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.328130  109541 watch_cache.go:409] Replace watchCache (rev: 30807) 
I1202 23:53:24.328548  109541 store.go:1350] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1202 23:53:24.328654  109541 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1202 23:53:24.328803  109541 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.329083  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.329109  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.329946  109541 watch_cache.go:409] Replace watchCache (rev: 30807) 
I1202 23:53:24.330248  109541 store.go:1350] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1202 23:53:24.330318  109541 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1202 23:53:24.330491  109541 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.330686  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.330710  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.331549  109541 store.go:1350] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1202 23:53:24.331581  109541 master.go:507] Enabling API group "apps".
I1202 23:53:24.331611  109541 watch_cache.go:409] Replace watchCache (rev: 30807) 
I1202 23:53:24.331800  109541 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.331895  109541 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1202 23:53:24.332056  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.332085  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.332747  109541 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1202 23:53:24.332787  109541 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1202 23:53:24.332962  109541 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.333162  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.333194  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.333713  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.333924  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.334023  109541 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1202 23:53:24.334082  109541 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1202 23:53:24.334208  109541 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.334420  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.334447  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.335016  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.335036  109541 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1202 23:53:24.335126  109541 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1202 23:53:24.335227  109541 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.335434  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.335461  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.336037  109541 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1202 23:53:24.336060  109541 master.go:507] Enabling API group "admissionregistration.k8s.io".
I1202 23:53:24.336088  109541 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1202 23:53:24.336107  109541 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.336393  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.336431  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:24.336495  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:24.336737  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.337253  109541 store.go:1350] Monitoring events count at <storage-prefix>//events
I1202 23:53:24.337275  109541 master.go:507] Enabling API group "events.k8s.io".
I1202 23:53:24.337321  109541 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1202 23:53:24.337529  109541 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.337799  109541 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.337983  109541 watch_cache.go:409] Replace watchCache (rev: 30808) 
I1202 23:53:24.338146  109541 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.338308  109541 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.338461  109541 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.338600  109541 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.338809  109541 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.338947  109541 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.339045  109541 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.339153  109541 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.340078  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.340349  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.341120  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.341384  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.342415  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.342786  109541 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.343952  109541 request.go:853] Got a Retry-After 1s response for attempt 5 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:24.344197  109541 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.344561  109541 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.345648  109541 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.346082  109541 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.346286  109541 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1202 23:53:24.347180  109541 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.348450  109541 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.348875  109541 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.349801  109541 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.350992  109541 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.352249  109541 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.352632  109541 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I1202 23:53:24.353709  109541 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.354198  109541 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.355689  109541 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.356877  109541 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.357494  109541 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.358546  109541 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.358811  109541 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1202 23:53:24.359940  109541 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.360368  109541 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.361102  109541 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.362169  109541 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.362836  109541 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.363912  109541 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.364757  109541 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.365736  109541 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.366438  109541 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.367694  109541 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.368591  109541 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.368825  109541 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1202 23:53:24.369693  109541 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.370721  109541 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.371019  109541 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1202 23:53:24.371824  109541 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.373034  109541 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.373850  109541 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.374398  109541 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.375361  109541 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.376745  109541 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.377452  109541 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.378500  109541 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.378758  109541 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1202 23:53:24.379753  109541 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.380610  109541 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.381355  109541 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.382280  109541 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.382722  109541 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.383400  109541 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.384332  109541 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.384765  109541 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.385190  109541 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.387619  109541 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.388182  109541 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.388708  109541 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.388914  109541 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1202 23:53:24.389005  109541 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1202 23:53:24.389840  109541 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.390662  109541 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.391714  109541 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.392691  109541 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1202 23:53:24.393624  109541 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"ca6c54ff-0245-4d5f-a410-1ec043f48099", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1202 23:53:24.397730  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1202 23:53:24.397882  109541 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1202 23:53:24.397894  109541 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1202 23:53:24.398107  109541 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1202 23:53:24.398124  109541 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1202 23:53:24.398193  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.398213  109541 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1202 23:53:24.398224  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.398236  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.398244  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.398272  109541 httplog.go:90] GET /healthz: (213.984µs) 0 [Go-http-client/1.1 127.0.0.1:51070]
I1202 23:53:24.398984  109541 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (456.982µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51070]
I1202 23:53:24.399578  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.421593ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:24.399618  109541 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=30796 labels= fields= timeout=9m51s
I1202 23:53:24.402927  109541 httplog.go:90] GET /api/v1/services: (1.859363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:24.407710  109541 httplog.go:90] GET /api/v1/services: (1.102322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:24.409964  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.410001  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.410013  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.410022  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.410058  109541 httplog.go:90] GET /healthz: (244.067µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:24.411602  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.383829ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I1202 23:53:24.412015  109541 httplog.go:90] GET /api/v1/services: (1.000129ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:24.413189  109541 httplog.go:90] GET /api/v1/services: (1.583923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.413536  109541 httplog.go:90] POST /api/v1/namespaces: (1.465474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51074]
I1202 23:53:24.415466  109541 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.431218ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.417468  109541 httplog.go:90] POST /api/v1/namespaces: (1.597749ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.419251  109541 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.404603ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.421101  109541 httplog.go:90] POST /api/v1/namespaces: (1.453454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.498086  109541 shared_informer.go:227] caches populated
I1202 23:53:24.498124  109541 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1202 23:53:24.498917  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.498943  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.498954  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.498964  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.498998  109541 httplog.go:90] GET /healthz: (224.827µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:24.511633  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.511678  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.511691  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.511702  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.511737  109541 httplog.go:90] GET /healthz: (250.451µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.599036  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.599077  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.599091  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.599099  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.599132  109541 httplog.go:90] GET /healthz: (271.792µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:24.610853  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.610975  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.610986  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.610995  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.611047  109541 httplog.go:90] GET /healthz: (339.582µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.699040  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.699076  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.699087  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.699114  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.699145  109541 httplog.go:90] GET /healthz: (264.666µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:24.711096  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.711363  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.711387  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.711505  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.711554  109541 httplog.go:90] GET /healthz: (850.215µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.799021  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.799056  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.799066  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.799074  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.799104  109541 httplog.go:90] GET /healthz: (271.27µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:24.811057  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.811092  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.811105  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.811115  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.811153  109541 httplog.go:90] GET /healthz: (424.204µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.898994  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.899038  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.899054  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.899066  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.899109  109541 httplog.go:90] GET /healthz: (264.361µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:24.910810  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.910846  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.910911  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.910921  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.910954  109541 httplog.go:90] GET /healthz: (288.953µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:24.999874  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:24.999910  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:24.999921  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:24.999931  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:24.999975  109541 httplog.go:90] GET /healthz: (283.124µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:25.010821  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:25.010991  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.011008  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.011017  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.011051  109541 httplog.go:90] GET /healthz: (384.426µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.100976  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:25.101006  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.101018  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.101027  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.101060  109541 httplog.go:90] GET /healthz: (262.767µs) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:25.111336  109541 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1202 23:53:25.111373  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.111386  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.111397  109541 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.111434  109541 httplog.go:90] GET /healthz: (660.935µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.186872  109541 client.go:361] parsed scheme: "endpoint"
I1202 23:53:25.186967  109541 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:53:25.202843  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.202882  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.202892  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.202939  109541 httplog.go:90] GET /healthz: (3.79441ms) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:25.212500  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.212533  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.212544  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.212587  109541 httplog.go:90] GET /healthz: (1.841289ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.300343  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.300376  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.300386  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.300443  109541 httplog.go:90] GET /healthz: (1.359672ms) 0 [Go-http-client/1.1 127.0.0.1:51078]
I1202 23:53:25.312169  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.312198  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.312209  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.312252  109541 httplog.go:90] GET /healthz: (1.500577ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.344469  109541 request.go:853] Got a Retry-After 1s response for attempt 6 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:25.403384  109541 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (5.272442ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.403591  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.403637  109541 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1202 23:53:25.403648  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.403703  109541 httplog.go:90] GET /healthz: (4.688804ms) 0 [Go-http-client/1.1 127.0.0.1:51206]
I1202 23:53:25.404132  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (6.026205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.408270  109541 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (3.773474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51078]
I1202 23:53:25.408421  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.722803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.408709  109541 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1202 23:53:25.410148  109541 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (1.191775ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51206]
I1202 23:53:25.410149  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.313608ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.412362  109541 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.850826ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51206]
I1202 23:53:25.412606  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.778577ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.412825  109541 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1202 23:53:25.412848  109541 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1202 23:53:25.412953  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.412974  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.413004  109541 httplog.go:90] GET /healthz: (2.52426ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.413840  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (835.168µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51206]
I1202 23:53:25.415165  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (925.981µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.416413  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (892.465µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.418031  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.271827ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.423458  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.29632ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.425244  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.308294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.427907  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.116669ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.428212  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1202 23:53:25.429573  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.146454ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.431975  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.947852ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.432247  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1202 23:53:25.433389  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (934.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.436178  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.272032ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.436450  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1202 23:53:25.437636  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (981.054µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.440848  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.814561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.441080  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1202 23:53:25.442447  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.16709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.445041  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.117268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.445287  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1202 23:53:25.446506  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.039812ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.448670  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.754076ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.448905  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1202 23:53:25.450043  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (952.049µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.452372  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.945937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.452610  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1202 23:53:25.453875  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.055025ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.455766  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.517147ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.456376  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1202 23:53:25.457471  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (884.806µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.460064  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.914912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.460306  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1202 23:53:25.463970  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (3.310961ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.466854  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.185743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.467344  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1202 23:53:25.468686  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.120293ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.471248  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.089115ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.471630  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1202 23:53:25.474140  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (2.286745ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.478195  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.097622ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.478548  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1202 23:53:25.479791  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.00277ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.483353  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.059937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.483793  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1202 23:53:25.485237  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.175341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.487952  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.183836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.488242  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1202 23:53:25.489753  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.262365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.492730  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.503529ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.492988  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1202 23:53:25.494784  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.588329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.500241  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.831676ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.500541  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1202 23:53:25.500726  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.500748  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.500781  109541 httplog.go:90] GET /healthz: (1.923415ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:25.501795  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.01388ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.505241  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.969111ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.505510  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1202 23:53:25.507439  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.664532ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.512350  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.512378  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.512424  109541 httplog.go:90] GET /healthz: (1.497624ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.513491  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.509058ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.514018  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1202 23:53:25.515371  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.136262ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.517876  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.068208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.518087  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1202 23:53:25.519550  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.234377ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.522176  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.95975ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.522420  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1202 23:53:25.524127  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.48644ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.526514  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.880509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.526744  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1202 23:53:25.528262  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.100108ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.531527  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.326297ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.532031  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1202 23:53:25.533297  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.066546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.536059  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.360637ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.536273  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1202 23:53:25.543648  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.49586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.548852  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.356301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.549389  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1202 23:53:25.551188  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.412699ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.555222  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.448112ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.555643  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1202 23:53:25.557315  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.249332ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.559804  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018439ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.560080  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1202 23:53:25.561301  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.018231ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.564748  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.38957ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.564985  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1202 23:53:25.569058  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (3.858892ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.571792  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.196799ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.572067  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1202 23:53:25.577349  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.589813ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.581115  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.091698ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.581816  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1202 23:53:25.584087  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.968692ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.587799  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.956226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.588107  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1202 23:53:25.590309  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.955444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.593619  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.538466ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.594100  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1202 23:53:25.595982  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.464972ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.600293  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.318971ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.600466  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.600505  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.600522  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1202 23:53:25.600541  109541 httplog.go:90] GET /healthz: (1.813305ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:25.601792  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.116995ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.604602  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.337454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.604963  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1202 23:53:25.608591  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (3.402139ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.611673  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.503707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.612268  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1202 23:53:25.616569  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (4.003076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.616585  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.616614  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.616675  109541 httplog.go:90] GET /healthz: (5.141611ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.620332  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.916272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.620720  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1202 23:53:25.622907  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.954475ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.625959  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.30387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.626360  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1202 23:53:25.627806  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.212241ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.630758  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.295182ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.633780  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1202 23:53:25.637081  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (2.97013ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.641779  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.964591ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.642274  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1202 23:53:25.643848  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.277205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.647332  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.814139ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.647625  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1202 23:53:25.649174  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.32004ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.651772  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.105606ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.652046  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1202 23:53:25.654800  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (2.494791ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.658210  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.644832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.658494  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1202 23:53:25.660458  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.732966ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.663016  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.018939ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.663274  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1202 23:53:25.664742  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.229768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.667065  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.896838ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.667277  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1202 23:53:25.668849  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.316697ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.672346  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.081671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.672820  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1202 23:53:25.675019  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.56168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.677782  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.078943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.678150  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1202 23:53:25.680171  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.82199ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.683746  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.986661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.684069  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1202 23:53:25.685821  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.428417ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.688844  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.53064ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.690397  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1202 23:53:25.699320  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (8.677182ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.700606  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.700642  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.700692  109541 httplog.go:90] GET /healthz: (1.461091ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:25.704427  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.215236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.704645  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1202 23:53:25.706291  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.436734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.709099  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.285805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.709347  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1202 23:53:25.710986  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.2605ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.711512  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.711535  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.711565  109541 httplog.go:90] GET /healthz: (978.471µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.713492  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.063841ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.714014  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1202 23:53:25.715168  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (980.122µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.723118  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (7.205577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.723497  109541 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1202 23:53:25.725041  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.321217ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.727563  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.977762ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.727905  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1202 23:53:25.729345  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.136568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.732599  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.75796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.732839  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1202 23:53:25.734469  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.307958ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.741711  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.559952ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.742314  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1202 23:53:25.761053  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.822471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.780924  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.634568ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.781218  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1202 23:53:25.800701  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.800728  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.800795  109541 httplog.go:90] GET /healthz: (2.109773ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:25.801587  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (3.299003ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.812167  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.812200  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.812243  109541 httplog.go:90] GET /healthz: (1.50233ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.824561  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.326168ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.824956  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1202 23:53:25.841038  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (2.692583ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.865517  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.189603ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.866055  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1202 23:53:25.889104  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (10.867727ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.904601  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.904632  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.904683  109541 httplog.go:90] GET /healthz: (1.288144ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:25.905627  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.588363ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.905938  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1202 23:53:25.921084  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.065965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:25.928524  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:25.928566  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:25.928612  109541 httplog.go:90] GET /healthz: (3.419442ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.943826  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.543686ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.944357  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1202 23:53:25.962231  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (3.96946ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.982605  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.916403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:25.984269  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1202 23:53:26.001424  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.001460  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.001516  109541 httplog.go:90] GET /healthz: (2.765369ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.001710  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (3.496517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.012121  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.012155  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.012200  109541 httplog.go:90] GET /healthz: (1.452534ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.022042  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.943285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.022637  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1202 23:53:26.041563  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.360338ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.061443  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.218346ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.061733  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1202 23:53:26.082203  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (3.966487ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.101205  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.030849ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.101745  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.101769  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.101803  109541 httplog.go:90] GET /healthz: (3.005797ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.101817  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1202 23:53:26.111914  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.111947  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.111997  109541 httplog.go:90] GET /healthz: (1.322253ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.122079  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (3.85843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.142171  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.895355ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.142908  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1202 23:53:26.160422  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.899971ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.181322  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.04045ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.181732  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1202 23:53:26.202634  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.202667  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.202707  109541 httplog.go:90] GET /healthz: (2.486746ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:26.203000  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (3.440078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.212059  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.212309  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.212534  109541 httplog.go:90] GET /healthz: (1.778771ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.221398  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.012916ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.221967  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1202 23:53:26.242033  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (3.290996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.262354  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.821889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.262605  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1202 23:53:26.280345  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.70506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.300837  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.625696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.300976  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.301003  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.301040  109541 httplog.go:90] GET /healthz: (1.856135ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:26.301150  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1202 23:53:26.312055  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.312096  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.312167  109541 httplog.go:90] GET /healthz: (1.45849ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.321044  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (2.650078ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.340730  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.525954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.341171  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1202 23:53:26.345145  109541 request.go:853] Got a Retry-After 1s response for attempt 7 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:26.360027  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.756549ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.390821  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.908237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.391121  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1202 23:53:26.401659  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.401696  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.401744  109541 httplog.go:90] GET /healthz: (2.977709ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.402912  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (4.693246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.412553  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.412593  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.412647  109541 httplog.go:90] GET /healthz: (1.984997ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.421191  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.726098ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.421511  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1202 23:53:26.439539  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.389612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.464762  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.456371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.465221  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1202 23:53:26.479965  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.709676ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.500770  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.500802  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.500841  109541 httplog.go:90] GET /healthz: (1.856307ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.501992  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.64219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.502429  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1202 23:53:26.517262  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.517289  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.517332  109541 httplog.go:90] GET /healthz: (2.740516ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.519439  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.350571ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.540534  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.382726ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.541066  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1202 23:53:26.559924  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.75181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.581477  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.940318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.581923  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1202 23:53:26.599577  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.419504ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.600798  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.600824  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.600875  109541 httplog.go:90] GET /healthz: (2.079727ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.619623  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.619667  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.619708  109541 httplog.go:90] GET /healthz: (9.041193ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.622017  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.366256ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.622282  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1202 23:53:26.639440  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.26833ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.661475  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.975231ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.661740  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1202 23:53:26.681474  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (3.211599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.700896  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.700925  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.700968  109541 httplog.go:90] GET /healthz: (2.226221ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.701438  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.229096ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.701666  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1202 23:53:26.711895  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.711930  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.711975  109541 httplog.go:90] GET /healthz: (1.346145ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.760879  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (42.686198ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.763682  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.179546ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.763956  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1202 23:53:26.765397  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.232084ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.780813  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.575212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.781148  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1202 23:53:26.799637  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.472523ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.800971  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.800998  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.801035  109541 httplog.go:90] GET /healthz: (2.294014ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:26.812018  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.812047  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.812105  109541 httplog.go:90] GET /healthz: (1.439615ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.821996  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.787285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.822417  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1202 23:53:26.839618  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.370205ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.864300  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (6.033228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.864736  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1202 23:53:26.880244  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.531929ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.902693  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.902737  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.902782  109541 httplog.go:90] GET /healthz: (3.714391ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:26.903114  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.896343ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.903361  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1202 23:53:26.952933  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:26.952969  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:26.953016  109541 httplog.go:90] GET /healthz: (42.285058ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:26.954697  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (36.43267ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.957490  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.25535ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:26.958955  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1202 23:53:26.960458  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.134542ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.002731  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (24.538385ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.003086  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1202 23:53:27.005030  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.005055  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.005106  109541 httplog.go:90] GET /healthz: (3.510942ms) 0 [Go-http-client/1.1 127.0.0.1:51210]
I1202 23:53:27.005410  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (2.131386ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.059641  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.059681  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.059729  109541 httplog.go:90] GET /healthz: (46.995367ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.060343  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (42.188912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:27.060733  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1202 23:53:27.115090  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (54.11471ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:27.115651  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.115671  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.115703  109541 httplog.go:90] GET /healthz: (4.65085ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.115783  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.115792  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.115824  109541 httplog.go:90] GET /healthz: (16.859509ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:27.118504  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.383577ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51210]
I1202 23:53:27.118854  109541 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1202 23:53:27.124742  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (5.662831ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.128254  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.872195ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.131225  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.446999ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.131469  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1202 23:53:27.132741  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.042937ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.134533  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.328255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.140576  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.441969ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.140892  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1202 23:53:27.160139  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.906245ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.163279  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.931379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.181142  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.947038ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.181402  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1202 23:53:27.200652  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.200683  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.200721  109541 httplog.go:90] GET /healthz: (1.67338ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:27.200825  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.532185ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.202884  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.3222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.211875  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.211918  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.211964  109541 httplog.go:90] GET /healthz: (1.232274ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.221181  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.929167ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.221459  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1202 23:53:27.239724  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.493336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.241694  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.428089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.261820  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.740436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.262414  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1202 23:53:27.279705  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.473246ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.282028  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.490078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.301204  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.993649ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.301684  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1202 23:53:27.302431  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.302460  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.302496  109541 httplog.go:90] GET /healthz: (2.90418ms) 0 [Go-http-client/1.1 127.0.0.1:51072]
I1202 23:53:27.311872  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.311908  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.311951  109541 httplog.go:90] GET /healthz: (1.282339ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.322514  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (4.148951ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.324716  109541 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.651363ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.341395  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (3.14901ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.342145  109541 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1202 23:53:27.345675  109541 request.go:853] Got a Retry-After 1s response for attempt 8 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:27.364689  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.452639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.366844  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.435261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.380740  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.510736ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.381094  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1202 23:53:27.399782  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.501367ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.400679  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.400710  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.400745  109541 httplog.go:90] GET /healthz: (1.132991ms) 0 [Go-http-client/1.1 127.0.0.1:51596]
I1202 23:53:27.401844  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.60258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.412065  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.412098  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.412140  109541 httplog.go:90] GET /healthz: (1.494812ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.421696  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.755161ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.422392  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1202 23:53:27.514469  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.514505  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.514551  109541 httplog.go:90] GET /healthz: (15.578185ms) 0 [Go-http-client/1.1 127.0.0.1:51596]
I1202 23:53:27.514702  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (76.464179ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.515459  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.515487  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.515522  109541 httplog.go:90] GET /healthz: (1.190659ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.517386  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.947403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51072]
I1202 23:53:27.520111  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.238923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.520382  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1202 23:53:27.521805  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.180901ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.523453  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.115794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.525733  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.865327ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.526175  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1202 23:53:27.534776  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (8.274329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.538454  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.698426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.541142  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.095624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.541373  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1202 23:53:27.559717  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.456504ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.561578  109541 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.380803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.581066  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.821907ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.581467  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1202 23:53:27.599936  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.599975  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.600013  109541 httplog.go:90] GET /healthz: (1.277924ms) 0 [Go-http-client/1.1 127.0.0.1:51596]
I1202 23:53:27.600190  109541 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (2.031354ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.602176  109541 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.521735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.612238  109541 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1202 23:53:27.612266  109541 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1202 23:53:27.612328  109541 httplog.go:90] GET /healthz: (1.590755ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.626528  109541 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.196067ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.626929  109541 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1202 23:53:27.700425  109541 httplog.go:90] GET /healthz: (1.268285ms) 200 [Go-http-client/1.1 127.0.0.1:51650]
W1202 23:53:27.701368  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701405  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701447  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1202 23:53:27.701498  109541 factory.go:127] Creating scheduler from algorithm provider 'DefaultProvider'
I1202 23:53:27.701515  109541 factory.go:219] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
W1202 23:53:27.701588  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701690  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701851  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701938  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.701952  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.702219  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.702238  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1202 23:53:27.702272  109541 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1202 23:53:27.702694  109541 reflector.go:153] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.702714  109541 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703160  109541 reflector.go:153] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703180  109541 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703518  109541 reflector.go:153] Starting reflector *v1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703542  109541 reflector.go:188] Listing and watching *v1.CSINode from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703586  109541 reflector.go:153] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703602  109541 reflector.go:188] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703947  109541 reflector.go:153] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.703963  109541 reflector.go:188] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704101  109541 reflector.go:153] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704117  109541 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704407  109541 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704423  109541 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704623  109541 reflector.go:153] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704637  109541 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704815  109541 reflector.go:153] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.704838  109541 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.705256  109541 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.705270  109541 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.705632  109541 reflector.go:153] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.705642  109541 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1202 23:53:27.705926  109541 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0: (475.346µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.706215  109541 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (567.883µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51686]
I1202 23:53:27.706312  109541 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (369.82µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:53:27.706651  109541 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (328.431µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51688]
I1202 23:53:27.706811  109541 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (282.43µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51596]
I1202 23:53:27.706948  109541 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (254.459µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51700]
I1202 23:53:27.707134  109541 get.go:251] Starting watch for /api/v1/nodes, rv=30797 labels= fields= timeout=6m51s
I1202 23:53:27.707454  109541 get.go:251] Starting watch for /apis/storage.k8s.io/v1/csinodes, rv=30807 labels= fields= timeout=5m34s
I1202 23:53:27.707481  109541 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=30807 labels= fields= timeout=5m41s
I1202 23:53:27.707542  109541 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (241.325µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51698]
I1202 23:53:27.707831  109541 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (233.933µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51690]
I1202 23:53:27.707832  109541 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30804 labels= fields= timeout=9m51s
I1202 23:53:27.708046  109541 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (1.458196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51702]
I1202 23:53:27.708494  109541 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=30807 labels= fields= timeout=9m15s
I1202 23:53:27.708535  109541 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30806 labels= fields= timeout=7m28s
I1202 23:53:27.708924  109541 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=30798 labels= fields= timeout=6m0s
I1202 23:53:27.708955  109541 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=30795 labels= fields= timeout=8m8s
I1202 23:53:27.709094  109541 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (434.719µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51696]
I1202 23:53:27.709623  109541 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (1.994302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51694]
I1202 23:53:27.709625  109541 get.go:251] Starting watch for /api/v1/services, rv=30798 labels= fields= timeout=9m45s
I1202 23:53:27.709884  109541 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=30795 labels= fields= timeout=7m24s
I1202 23:53:27.710515  109541 get.go:251] Starting watch for /api/v1/pods, rv=30797 labels= fields= timeout=9m56s
I1202 23:53:27.712376  109541 httplog.go:90] GET /healthz: (1.184719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.713889  109541 httplog.go:90] GET /api/v1/namespaces/default: (1.145843ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.716050  109541 httplog.go:90] POST /api/v1/namespaces: (1.744845ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.802623  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802672  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802680  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802686  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802695  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802701  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802707  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802712  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802719  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802729  109541 shared_informer.go:227] caches populated
I1202 23:53:27.802735  109541 shared_informer.go:227] caches populated
I1202 23:53:27.803342  109541 shared_informer.go:227] caches populated
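Sketch only: the "caches populated" lines above are the test's shared informers finishing their initial List+Watch before the scheduler is exercised. A rough client-go equivalent of that wait is shown below; the function name, kubeconfig plumbing and the informer set are assumptions for illustration, not the test's actual helper code (only the 1s resync period is taken from the log).

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForInformerCaches starts a shared informer factory and blocks until
// every started informer has synced, i.e. until "caches populated".
func waitForInformerCaches(kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// 1s resync matches the "(1s)" reflectors started earlier in the log.
	factory := informers.NewSharedInformerFactory(client, time.Second)
	// Register a couple of the informers the scheduler consumes.
	factory.Core().V1().Pods().Informer()
	factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Blocks until each started informer reports its cache as populated.
	factory.WaitForCacheSync(stop)
	return nil
}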
I1202 23:53:27.824269  109541 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (107.487546ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.824933  109541 httplog.go:90] POST /api/v1/nodes: (21.375197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:27.825301  109541 node_tree.go:86] Added node "node1" in group "" to NodeTree
I1202 23:53:27.828398  109541 httplog.go:90] PATCH /api/v1/nodes/node1: (2.959019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:27.836592  109541 httplog.go:90] POST /api/v1/namespaces/default/services: (11.576073ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.839002  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.323076ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.843559  109541 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (3.575325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.931057  109541 httplog.go:90] GET /api/v1/nodes/node1: (1.814379ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.934805  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods: (3.062371ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.935105  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:27.935222  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:27.935507  109541 scheduler_binder.go:278] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod", node "node1"
I1202 23:53:27.935614  109541 scheduler_binder.go:288] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod", node "node1": all PVCs bound and nothing to do
I1202 23:53:27.935814  109541 factory.go:519] Attempting to bind victim-pod to node1
I1202 23:53:27.938761  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod/binding: (2.454328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:27.939297  109541 scheduler.go:751] pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I1202 23:53:27.942491  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (2.623356ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.122055  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (86.052222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.128660  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.916217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.131771  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods: (2.410718ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.132428  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.132448  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.132570  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu.; waiting
I1202 23:53:28.132611  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:28.136052  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/status: (2.8705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.136806  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (3.2078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51764]
I1202 23:53:28.139999  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (3.427522ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.140254  109541 generic_scheduler.go:1211] Node node1 is a potential node for preemption.
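Sketch only: the pod pair implied by the log above is a low-priority pod that fills node1's CPU and a higher-priority pod that cannot fit, which leads to "Node node1 is a potential node for preemption" and the subsequent DELETE of victim-pod. The priority values, image and CPU sizes below are invented for illustration and are not copied from the test.

package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePods builds a hypothetical victim/preemptor pair: both request most
// of the node's CPU, so the preemptor can only schedule after the victim is
// evicted.
func examplePods() (victim, preemptor *v1.Pod) {
	lowPriority, highPriority := int32(100), int32(1000)
	bigCPU := v1.ResourceRequirements{
		Requests: v1.ResourceList{v1.ResourceCPU: resource.MustParse("900m")},
	}
	victim = &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "victim-pod"},
		Spec: v1.PodSpec{
			Priority:   &lowPriority,
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1", Resources: bigCPU}},
		},
	}
	preemptor = &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "preemptor-pod"},
		Spec: v1.PodSpec{
			Priority:   &highPriority,
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1", Resources: bigCPU}},
		},
	}
	return victim, preemptor
}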
I1202 23:53:28.141631  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (8.737556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
E1202 23:53:28.142132  109541 factory.go:494] pod is already present in the activeQ
I1202 23:53:28.143147  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/status: (2.506816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51708]
I1202 23:53:28.147055  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.438571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:28.147375  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.147392  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.147518  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu.; waiting
I1202 23:53:28.147551  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:28.150041  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (1.899987ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51768]
I1202 23:53:28.150202  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (2.58984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:28.151814  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (3.874638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51764]
I1202 23:53:28.152609  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (4.161087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51770]
I1202 23:53:28.346271  109541 request.go:853] Got a Retry-After 1s response for attempt 9 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:28.706846  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.707045  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.707573  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.708445  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.709420  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.709607  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.710448  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:28.710587  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.710608  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:28.711078  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 Insufficient cpu.; waiting
I1202 23:53:28.711137  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:28.796190  109541 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events/preemptor-pod.15dcb306075d0973: (83.419601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:28.796690  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (85.223411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51768]
I1202 23:53:28.796690  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (85.243573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.134509  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.671261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.239043  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (3.723669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.249770  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (10.154837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.250210  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.250229  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.250363  109541 scheduler_binder.go:278] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod", node "node1"
I1202 23:53:29.250377  109541 scheduler_binder.go:288] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod", node "node1": all PVCs bound and nothing to do
I1202 23:53:29.250448  109541 factory.go:519] Attempting to bind preemptor-pod to node1
I1202 23:53:29.252405  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/binding: (1.703624ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:29.252636  109541 scheduler.go:751] pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I1202 23:53:29.252992  109541 store.go:365] GuaranteedUpdate of /ca6c54ff-0245-4d5f-a410-1ec043f48099/pods/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod failed because of a conflict, going to retry
I1202 23:53:29.256527  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (3.361078ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:29.259537  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (9.345407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.262589  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.221491ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.265769  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (1.451167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.268199  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods: (1.903318ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.268558  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:29.268576  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:29.268715  109541 scheduler_binder.go:278] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod", node "node1"
I1202 23:53:29.268730  109541 scheduler_binder.go:288] AssumePodVolumes for pod "preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod", node "node1": all PVCs bound and nothing to do
I1202 23:53:29.268797  109541 factory.go:519] Attempting to bind victim-pod to node1
I1202 23:53:29.271136  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod/binding: (1.958428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.271334  109541 scheduler.go:751] pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod is bound successfully on node "node1", 1 nodes evaluated, 1 nodes were found feasible.
I1202 23:53:29.275281  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (3.645696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
E1202 23:53:29.346907  109541 factory.go:503] Error getting pod permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/test-pod for retry: an error on the server ("") has prevented the request from succeeding (get pods test-pod); retrying...
I1202 23:53:29.371073  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.230044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.373638  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.008123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.376645  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods: (2.41894ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.376972  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.376989  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.377118  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 can't fit preemptor-pod.; waiting
I1202 23:53:29.377168  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:29.385166  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (6.925515ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:29.385406  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/status: (7.96597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.386794  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (9.111444ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
E1202 23:53:29.387282  109541 factory.go:494] pod is already present in the activeQ
I1202 23:53:29.387438  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (1.48138ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51722]
I1202 23:53:29.387707  109541 generic_scheduler.go:1211] Node node1 is a potential node for preemption.
I1202 23:53:29.390816  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/status: (2.277293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:29.394391  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.83956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:29.394696  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.394721  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.394837  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 can't fit preemptor-pod.; waiting
I1202 23:53:29.394888  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:29.397737  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (2.207684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:29.399411  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (3.881643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:29.399584  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (4.014831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51840]
I1202 23:53:29.399901  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (3.411353ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51924]
I1202 23:53:29.399958  109541 generic_scheduler.go:1211] Node node1 is a potential node for preemption.
I1202 23:53:29.402522  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod/status: (2.184326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:29.404551  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.564021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:29.407112  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (2.041968ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:29.707699  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.707899  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.708142  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.708565  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.709563  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.709756  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.710581  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:29.710772  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.710852  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:29.711025  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 can't fit preemptor-pod.; waiting
I1202 23:53:29.711089  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:29.849556  109541 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events/preemptor-pod.15dcb30651b5db72: (136.239525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:29.892693  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (180.530627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:29.892717  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (181.302939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51918]
I1202 23:53:30.379292  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.903605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:30.482090  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (1.96319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:30.488632  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.939768ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:30.488923  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:30.488951  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:30.489101  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod: no fit: 0/1 nodes are available: 1 can't fit preemptor-pod.; waiting
I1202 23:53:30.489151  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:30.491757  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (2.030818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:30.492173  109541 generic_scheduler.go:1211] Node node1 is a potential node for preemption.
I1202 23:53:30.494704  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:30.494750  109541 scheduler.go:601] Skip schedule deleting pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:30.497443  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (6.091887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52236]
I1202 23:53:30.498115  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (2.776557ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:30.500464  109541 httplog.go:90] DELETE /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (11.243673ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51922]
I1202 23:53:30.504310  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.600444ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52236]
I1202 23:53:30.508000  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/preemptor-pod: (1.940616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52236]
I1202 23:53:30.522823  109541 httplog.go:90] POST /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods: (14.204675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52236]
I1202 23:53:30.525178  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:30.525211  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:30.525342  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:30.525391  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:30.549819  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (23.945208ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:30.551275  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (24.893871ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.570746  109541 httplog.go:90] PUT /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod/status: (44.013849ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52236]
I1202 23:53:30.573318  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.792382ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.573726  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:30.634805  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (11.187157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.708270  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.708562  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.708583  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.708660  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.709831  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.709977  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.710801  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:30.710920  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:30.710933  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:30.711098  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:30.711140  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:30.714498  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.404658ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.714982  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.972343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:30.715411  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:30.717443  109541 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events: (5.307491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52302]
I1202 23:53:30.725597  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.961714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.825775  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.046478ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.935129  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (11.414724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:30.947477  109541 request.go:853] Got a Retry-After 1s response for attempt 1 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:31.026645  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.787102ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.129122  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.816004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.226284  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.182617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.335771  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (12.060409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.429707  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.73989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.532202  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (8.529146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.627170  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.368998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.703198  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:31.703258  109541 scheduler.go:601] Skip schedule deleting pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/preemptor-pod
I1202 23:53:31.707706  109541 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events/preemptor-pod.15dcb306934465de: (3.581173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.708688  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.708823  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.708904  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.708919  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.710953  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.725658  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.901892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.747815  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.747924  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:31.747968  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:31.747981  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:31.748130  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:31.748172  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:31.756192  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.383398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:31.756356  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.027938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.756627  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:31.759883  109541 httplog.go:90] PATCH /apis/events.k8s.io/v1beta1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/events/victim-pod.15dcb306a02a40b6: (10.508963ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52596]
I1202 23:53:31.827289  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.605346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.926885  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.572114ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:31.949317  109541 request.go:853] Got a Retry-After 1s response for attempt 2 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:32.025409  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.720631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.131738  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (7.981882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.227061  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.22307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.325382  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.718204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.425626  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.837813ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.525316  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.632617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.631757  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (8.079857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.708950  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.708957  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.709200  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.709225  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.711238  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.725611  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.88133ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.748007  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.748308  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:32.825647  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.965611ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.925339  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.683991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:32.949884  109541 request.go:853] Got a Retry-After 1s response for attempt 3 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:33.025913  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.233717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.125262  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.563831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.226257  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.471093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.325662  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.979332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.438013  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (14.227637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.529504  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.333687ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.626182  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.519533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.709115  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.709162  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.709305  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.709320  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.711974  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.712103  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:33.712116  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:33.712253  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:33.712291  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:33.714726  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.893131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:33.715193  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.637607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.715503  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:33.727009  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.98535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.748210  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.748496  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:33.825396  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.722665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.927467  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.797433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:33.950383  109541 request.go:853] Got a Retry-After 1s response for attempt 4 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:34.026325  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.641842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.128745  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.034699ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.226464  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.721754ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.326424  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.330226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.427053  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.390217ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.526688  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.99952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.627331  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.564279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
E1202 23:53:34.628125  109541 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:39937/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/events: dial tcp 127.0.0.1:39937: connect: connection refused' (may retry after sleeping)
I1202 23:53:34.705918  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:34.705953  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:34.706143  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:34.706185  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:34.708143  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.590259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:34.709133  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.6637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.709258  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.709283  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.709363  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.709387  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.709513  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:34.712282  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.728223  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.936509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.748423  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.748641  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:34.825649  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.928857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.927503  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.82092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:34.951003  109541 request.go:853] Got a Retry-After 1s response for attempt 5 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:35.025936  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.079116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.127698  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.007008ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.225674  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.010287ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.326351  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.594686ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.425662  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.914663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.527491  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.710808ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.627901  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.225694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.709423  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.709499  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.709540  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.709549  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.712468  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.712592  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:35.712604  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:35.712742  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:35.712784  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:35.719924  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.686447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:35.719926  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.163412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.720327  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:35.725590  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.926763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.748838  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.749520  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:35.827375  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.659347ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.925730  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.011325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:35.951850  109541 request.go:853] Got a Retry-After 1s response for attempt 6 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:36.025684  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.966851ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.126154  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.489491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.225684  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.937088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.325789  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.124879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.425271  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.602998ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.525671  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.643024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.625971  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.239855ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.709709  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.709772  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.710247  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.710258  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.714494  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.714630  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:36.714642  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:36.714792  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:36.714834  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:36.718649  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.3817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:36.719036  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:36.719583  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.339763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:36.726424  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.626096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:36.749304  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.749717  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:36.831136  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (7.445442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:36.926119  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.331279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:36.952362  109541 request.go:853] Got a Retry-After 1s response for attempt 7 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:37.026377  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.656376ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.126189  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.435621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.225688  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.029426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.325596  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.858546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.425480  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.723789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.527038  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.294119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.626081  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.293565ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.710127  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.710192  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.710394  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.710690  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.714617  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.714741  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:37.714754  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:37.714920  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:37.714963  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:37.715122  109541 httplog.go:90] GET /api/v1/namespaces/default: (2.13787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.717774  109541 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.571225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54634]
I1202 23:53:37.717835  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.133156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.717841  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.370057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52242]
I1202 23:53:37.718121  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:37.719478  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.324661ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54634]
I1202 23:53:37.725533  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.85848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.749877  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.749886  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:37.826575  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.859706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.926294  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.581085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:37.953094  109541 request.go:853] Got a Retry-After 1s response for attempt 8 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:38.027146  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.468859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.126350  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.363072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.225485  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.843407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.325841  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.832533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.425906  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.191025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.525683  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.965934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.626130  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.444058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.710220  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.710374  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:38.710387  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:38.710554  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:38.710620  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:38.710835  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.711167  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.711250  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.713768  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.755131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.714105  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:38.714415  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.976117ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:38.714834  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.714949  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:38.714961  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:38.715091  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:38.715130  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:38.717038  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.534252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.717038  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.680951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:38.717361  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:38.726161  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.345959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.750113  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.750256  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:38.825821  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.988638ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.925247  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.496349ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:38.953580  109541 request.go:853] Got a Retry-After 1s response for attempt 9 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:39.026840  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.371348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.125979  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.236693ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.225581  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.971799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.326466  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.696471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.425559  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.859025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.525894  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.027872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.625809  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.103786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.710647  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.711013  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.711983  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.712023  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.714986  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.715093  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:39.715106  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:39.715277  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:39.715318  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:39.717512  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.920604ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.717537  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.876948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:39.717843  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:39.725931  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.26157ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.750334  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.750391  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:39.825953  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.297944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:39.925576  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.881786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
E1202 23:53:39.954215  109541 factory.go:503] Error getting pod permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/test-pod for retry: an error on the server ("") has prevented the request from succeeding (get pods test-pod); retrying...
I1202 23:53:40.025800  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.10042ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.125785  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.111118ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.225338  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.730377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.325683  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.97337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.425507  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.808586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.525568  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.871916ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.626515  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.729906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.711036  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.712071  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.712121  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.712204  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.726430  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.712469ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.728070  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.728169  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:40.728191  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:40.728336  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:40.728378  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:40.731547  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.707979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:40.731816  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.882099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.732016  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:40.750473  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.751645  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:40.825680  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.979085ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:40.934977  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (10.541989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.026482  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.777276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.127138  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.028413ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.225495  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.761321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.328874  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.194527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.427150  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.523729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.548557  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (24.871516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.626541  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.833193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.711216  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.712201  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.712301  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.712402  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.727033  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.36999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.728229  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.728351  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:41.728367  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:41.728508  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:41.728548  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:41.731211  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.827015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:41.731410  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.70951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:41.731516  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:41.750723  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.752088  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:41.830018  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.302213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:41.926783  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.659278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.025433  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.715421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.125918  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.284387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.225478  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.792529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.325710  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.030181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.426453  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.741608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.527137  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.394539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.625491  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.793723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.712195  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.712405  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.712522  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.712536  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.726377  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.822795ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.728405  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.728552  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:42.728567  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:42.728722  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:42.728775  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:42.731204  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.76818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:42.731204  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.530256ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:42.731495  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:42.750957  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.752282  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:42.826209  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.542977ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:42.928188  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.836019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.025620  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.801398ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.125322  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.671696ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.155143  109541 request.go:853] Got a Retry-After 1s response for attempt 1 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:43.226217  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.561516ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.328029  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.281923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.428893  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.038729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.540830  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (16.970844ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.625652  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.951974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.712375  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.712545  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.712644  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.712741  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.725895  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.021447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.728642  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.728785  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:43.728801  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:43.728974  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:43.729023  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:43.732095  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.769957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.732153  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.681885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:43.732425  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:43.751177  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.752467  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:43.826027  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.370078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:43.925497  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.74867ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.026648  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.900942ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.125396  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.670806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.155680  109541 request.go:853] Got a Retry-After 1s response for attempt 2 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:44.227819  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.104233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.325236  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.581402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.425335  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.673472ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.525905  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.472282ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.636484  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (12.22794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.712531  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.712672  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.712804  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.712806  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.726077  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.320787ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.728831  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.728969  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:44.728988  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:44.729137  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:44.729183  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:44.731362  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.809303ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:44.731363  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.913947ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.731914  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:44.751402  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.752674  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:44.826514  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.845086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:44.926596  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.941759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.025599  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.839228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.126327  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.67626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.156230  109541 request.go:853] Got a Retry-After 1s response for attempt 3 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:45.227302  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.636268ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.325678  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.996231ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.426131  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.15238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.527832  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.150131ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.626433  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.733737ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.712724  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.712879  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.713019  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.713046  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.726957  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.196475ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.730116  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.730261  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:45.730279  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:45.730433  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:45.730494  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:45.733258  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.145784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.733578  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.520525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:45.733967  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:45.751604  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.752886  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:45.828687  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.937211ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:45.925180  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.568299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.025701  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.98388ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.125784  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.089732ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.158300  109541 request.go:853] Got a Retry-After 1s response for attempt 4 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:46.225408  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.706705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.326300  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.62463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
E1202 23:53:46.415298  109541 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:39937/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/events: dial tcp 127.0.0.1:39937: connect: connection refused' (may retry after sleeping)
I1202 23:53:46.425560  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.851135ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.526175  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.47846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.625398  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.702167ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.712912  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.713107  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.713338  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.713371  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.725573  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.96255ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.730310  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.730462  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:46.730475  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:46.730624  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:46.730665  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:46.733016  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.646578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:46.733494  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.583446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.733897  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:46.751849  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.753175  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:46.829302  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.262289ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:46.926224  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.54262ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.025828  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.152035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.125366  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.684168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.160075  109541 request.go:853] Got a Retry-After 1s response for attempt 5 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:47.225670  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.976952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.325338  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.667132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.425766  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.085788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.526042  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.229351ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.625909  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.265533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.713147  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.713297  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.713463  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.713484  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.715963  109541 httplog.go:90] GET /api/v1/namespaces/default: (2.77591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.718338  109541 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.823783ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.720929  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.468959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.727284  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.725319ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.730507  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.731040  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:47.731068  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:47.731317  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:47.731400  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:47.733431  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.572199ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:47.733820  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:47.734342  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.640492ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.752463  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.753360  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:47.825778  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.063846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:47.926966  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.252015ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.025558  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.887773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.126005  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.280965ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.160650  109541 request.go:853] Got a Retry-After 1s response for attempt 6 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:48.226199  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.233182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.325790  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.986143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.425917  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.255357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.526002  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.19247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.626015  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.320283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.713583  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.713869  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.713892  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.713921  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.725457  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.7842ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.730883  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.731034  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:48.731116  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:48.731333  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:48.731410  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:48.734889  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.199065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:48.734904  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.109774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.735394  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:48.752676  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.753565  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:48.826077  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.378756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:48.925364  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.621257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.026266  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.544073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.125998  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.284976ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.161548  109541 request.go:853] Got a Retry-After 1s response for attempt 7 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:49.225964  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.314669ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.330419  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.088458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.428478  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.775232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.531320  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.0749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.630726  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.893017ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.713781  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.714138  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.714163  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.714176  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.731065  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.731187  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:49.731199  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:49.731335  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:49.731378  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:49.741392  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (8.781411ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:49.741795  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:49.742210  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (7.429425ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:57792]
I1202 23:53:49.742557  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (10.524988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:49.752921  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.753725  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:49.825579  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.912719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:49.925528  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.830883ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.025400  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.71939ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.125393  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.706377ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.162133  109541 request.go:853] Got a Retry-After 1s response for attempt 8 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:50.226163  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.99857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.325619  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.968052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.440846  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (16.315047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.525693  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.988201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.625732  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.076258ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.714052  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.714294  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.714299  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.714321  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.729677  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.012948ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.731295  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.731425  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:50.731443  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:50.731553  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:50.731591  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:50.734156  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.189922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:50.735201  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.879002ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.735496  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:50.753151  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.754212  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:50.832192  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.523461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:50.925983  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.253961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.025660  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.98214ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.126537  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.0639ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.162666  109541 request.go:853] Got a Retry-After 1s response for attempt 9 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:51.275133  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (20.280748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.325852  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.194166ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.425280  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.622769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.527936  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.270548ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.627783  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.092717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.714483  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.714546  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.714798  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.714934  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.726735  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.903747ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.731474  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.731601  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:51.731618  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:51.731762  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:51.731810  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:51.734027  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.575742ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.734027  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.818539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:51.734516  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:51.753369  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.754399  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:51.827761  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.047935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:51.925964  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.739576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.025820  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.101633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.126059  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.360143ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
E1202 23:53:52.163932  109541 factory.go:503] Error getting pod permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/test-pod for retry: an error on the server ("") has prevented the request from succeeding (get pods test-pod); retrying...
I1202 23:53:52.225546  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.873902ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.326787  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.752187ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.428751  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.913577ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.526685  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.730719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.626094  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.1141ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.714670  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.714717  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.714903  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.715075  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.726297  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.036149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.731650  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.731797  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:52.731810  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:52.731980  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:52.732033  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:52.735647  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.659074ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:52.735646  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.898908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:52.735976  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:52.753847  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.754618  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:52.828223  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.482495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:52.925362  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.665246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.026649  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.915857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.126847  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.167454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.226008  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.645447ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.327889  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.205564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.425561  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.828957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.525843  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.151533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.627237  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.201263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.714947  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.715023  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.715070  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.715234  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.726418  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.50216ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.731899  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.732074  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:53.732096  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:53.732319  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:53.732393  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:53.736843  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.699357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:53.738391  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (5.22191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.739424  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:53.754138  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.754809  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:53.827460  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.24339ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:53.926731  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.761556ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.026557  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.512957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.131731  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.191983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.226813  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.758041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.327479  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.596082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.426921  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.988876ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.528623  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.473026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.626043  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.831612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.715105  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.715201  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.715257  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.715573  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.726329  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.651343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.732139  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.732263  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:54.732274  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:54.732420  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:54.732458  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:54.739297  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (6.213812ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:54.740993  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (7.377359ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.741308  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:54.754422  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.755057  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:54.825666  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.99228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:54.927742  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.514597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.027773  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (4.077445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.125507  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.78835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.225616  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.927712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.325762  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.066307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.426343  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.918273ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.526128  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.39872ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.625790  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.064839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.715829  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.715841  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.715903  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.715915  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.725778  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.032372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.732358  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.732502  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:55.732516  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:55.732711  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:55.732759  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:55.735337  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.129058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:55.735337  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.894391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:55.735688  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:55.754691  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.755234  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:55.825687  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.019993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:55.927564  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.821674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.025356  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.730825ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.127419  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.690601ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.225699  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.776136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.325516  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.703709ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.425448  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.75078ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.525158  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.506702ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.625343  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.677863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.716094  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.716089  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.716140  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.716144  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.725340  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.644163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.732474  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.732623  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:56.732643  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:56.732793  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:56.732850  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:56.735295  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.073105ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.735295  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.092463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:56.735712  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:56.754950  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.755449  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:56.825731  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.036261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:56.926147  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.41966ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.025815  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.077818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.125678  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.968443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.225984  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.248836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.325689  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.900697ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.427262  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.499123ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.525662  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.926436ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.625568  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.857831ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.715764  109541 httplog.go:90] GET /api/v1/namespaces/default: (2.288574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.716242  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.716250  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.716279  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.716282  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.719334  109541 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (2.794609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.721715  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.816837ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.726398  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.722897ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.732668  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.733162  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:57.733311  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:57.733583  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:57.733731  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:57.735930  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.626062ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:57.735991  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.909084ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.736526  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:57.755178  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.755670  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:57.825164  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.511722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:57.925384  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.610307ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.025421  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.786641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.125365  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.640596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.227162  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.468773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.325678  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.916801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.425946  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.230537ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.526814  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (3.105165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.564562  109541 request.go:853] Got a Retry-After 1s response for attempt 1 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:58.625809  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.068885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.716440  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.716470  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.716470  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.716450  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.726344  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.575202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.733207  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.733346  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:58.733358  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:58.733497  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:58.733538  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
E1202 23:53:58.733771  109541 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:39937/apis/events.k8s.io/v1beta1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/events: dial tcp 127.0.0.1:39937: connect: connection refused' (may retry after sleeping)
I1202 23:53:58.735728  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.899911ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:58.735747  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.984294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.736130  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:58.755425  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.755918  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:58.827226  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.959685ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:58.925994  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.124407ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.025729  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.963853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.125974  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.998806ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.226159  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.385243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.325739  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.958668ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.425936  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.883873ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.525801  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.044666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.565132  109541 request.go:853] Got a Retry-After 1s response for attempt 2 to http://127.0.0.1:39937/api/v1/namespaces/permit-pluginsc87fc853-90b6-419e-83aa-6d9d22d229ea/pods/test-pod
I1202 23:53:59.626068  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.219318ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.716610  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.716792  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.716840  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.724442  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.725780  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.076641ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.733423  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.733560  109541 scheduling_queue.go:841] About to try and schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:59.733575  109541 scheduler.go:605] Attempting to schedule pod: preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod
I1202 23:53:59.733731  109541 factory.go:453] Unable to schedule preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod: no fit: 0/1 nodes are available: 1 can't fit victim-pod.; waiting
I1202 23:53:59.733779  109541 scheduler.go:769] Updating pod condition for preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod to (PodScheduled==False, Reason=Unschedulable)
I1202 23:53:59.736062  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.336222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:59.736107  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.050038ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:54632]
I1202 23:53:59.736438  109541 generic_scheduler.go:344] Preemption will not help schedule pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod on any node.
I1202 23:53:59.755650  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.756415  109541 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1202 23:53:59.825872  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.080389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:53:59.925617  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.923235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.026485  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.72437ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.126257  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.523848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.226065  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.377419ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.325468  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.787573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.426261  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.184435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.525913  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (2.153958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.529522  109541 httplog.go:90] GET /api/v1/namespaces/preemptiom2f805dc8-6071-41be-ad59-cca287d38585/pods/victim-pod: (1.442464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
E1202 23:54:00.530490  109541 scheduling_queue.go:844] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I1202 23:54:00.530621  109541 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30807&timeout=9m15s&timeoutSeconds=555&watch=true: (32.822270001s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51690]
I1202 23:54:00.530538  109541 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30798&timeout=6m0s&timeoutSeconds=360&watch=true: (32.821784876s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51686]
I1202 23:54:00.530746  109541 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30795&timeout=8m8s&timeoutSeconds=488&watch=true: (32.822599269s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51688]
I1202 23:54:00.530830  109541 httplog.go:90] GET /apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=30807&timeout=5m34s&timeoutSeconds=334&watch=true: (32.823479881s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51692]
I1202 23:54:00.530951  109541 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30795&timeout=7m24s&timeoutSeconds=444&watch=true: (32.82136288s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51706]
I1202 23:54:00.530776  109541 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30807&timeout=5m41s&timeoutSeconds=341&watch=true: (32.823434051s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51700]
I1202 23:54:00.530989  109541 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30797&timeout=6m51s&timeoutSeconds=411&watch=true: (32.824057358s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51650]
I1202 23:54:00.531013  109541 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30798&timeout=9m45s&timeoutSeconds=585&watch=true: (32.821567689s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51696]
I1202 23:54:00.531028  109541 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=30797&timeout=9m56s&timeoutSeconds=596&watch=true: (32.820676729s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51694]
I1202 23:54:00.531050  109541 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30804&timeout=9m51s&timeoutSeconds=591&watch=true: (32.82333694s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51704]
I1202 23:54:00.531052  109541 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30806&timeout=7m28s&timeoutSeconds=448&watch=true: (32.822627273s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51698]
I1202 23:54:00.535974  109541 httplog.go:90] DELETE /api/v1/nodes: (5.123156ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.536236  109541 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1202 23:54:00.546792  109541 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (10.216579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.549995  109541 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.384215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:52000]
I1202 23:54:00.550434  109541 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1202 23:54:00.550564  109541 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=30796&timeout=9m51s&timeoutSeconds=591&watch=true: (36.151155723s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:51070]
--- FAIL: TestPreemption (36.37s)
    preemption_test.go:402: Test [basic pod preemption with unresolvable filter]: Error running pause pod: Pod preemptiom2f805dc8-6071-41be-ad59-cca287d38585/victim-pod didn't schedule successfully. Error: timed out waiting for the condition

				from junit_304dbea7698c16157bb4586f231ea1f94495b046_20191202-234534.xml

Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [1202 23:35:02] Call tree:
!!! [1202 23:35:02]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1202 23:35:02]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1202 23:35:02]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [1202 23:35:02]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [1202 23:35:02]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [1202 23:35:02] Running kubeadm tests
+++ [1202 23:35:07] Building go targets for linux/amd64:
    cmd/kubeadm
Running tests for APIVersion: v1,admissionregistration.k8s.io/v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,discovery.k8s.io/v1alpha1,discovery.k8s.io/v1beta1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,flowcontrol.apiserver.k8s.io/v1alpha1,
+++ [1202 23:35:59] Running tests without code coverage
{"Time":"2019-12-02T23:37:24.957312563Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t40.585s\n"}
... skipping 303 lines ...
+++ [1202 23:39:23] Building kube-controller-manager
+++ [1202 23:39:29] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [1202 23:40:01] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I1202 23:40:01.958793   54469 serving.go:312] Generated self-signed cert in-memory
W1202 23:40:02.502757   54469 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1202 23:40:02.502815   54469 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1202 23:40:02.502823   54469 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1202 23:40:02.502839   54469 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1202 23:40:02.502884   54469 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1202 23:40:02.502912   54469 controllermanager.go:161] Version: v1.18.0-alpha.0.1346+d34a61e65b495b
I1202 23:40:02.504094   54469 secure_serving.go:178] Serving securely on [::]:10257
I1202 23:40:02.504508   54469 tlsconfig.go:219] Starting DynamicServingCertificateController
I1202 23:40:02.504512   54469 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1202 23:40:02.504828   54469 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 65 lines ...
I1202 23:40:03.297820   54469 graph_builder.go:282] GraphBuilder running
I1202 23:40:03.298098   54469 controllermanager.go:533] Started "replicaset"
W1202 23:40:03.298110   54469 controllermanager.go:512] "bootstrapsigner" is disabled
I1202 23:40:03.298303   54469 node_lifecycle_controller.go:77] Sending events to api server
I1202 23:40:03.298316   54469 replica_set.go:180] Starting replicaset controller
I1202 23:40:03.298327   54469 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
E1202 23:40:03.298333   54469 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W1202 23:40:03.298343   54469 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I1202 23:40:03.298720   54469 controllermanager.go:533] Started "deployment"
W1202 23:40:03.298972   54469 controllermanager.go:525] Skipping "ttl-after-finished"
I1202 23:40:03.298765   54469 deployment_controller.go:152] Starting deployment controller
I1202 23:40:03.299410   54469 shared_informer.go:197] Waiting for caches to sync for deployment
I1202 23:40:03.299527   54469 controllermanager.go:533] Started "endpoint"
... skipping 18 lines ...
I1202 23:40:03.302895   54469 node_lifecycle_controller.go:423] Controller is using taint based evictions.
I1202 23:40:03.303319   54469 taint_manager.go:162] Sending events to api server.
I1202 23:40:03.303604   54469 node_lifecycle_controller.go:520] Controller will reconcile labels.
I1202 23:40:03.303841   54469 controllermanager.go:533] Started "nodelifecycle"
I1202 23:40:03.303891   54469 node_lifecycle_controller.go:554] Starting node controller
I1202 23:40:03.304202   54469 shared_informer.go:197] Waiting for caches to sync for taint
E1202 23:40:03.304545   54469 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1202 23:40:03.304587   54469 controllermanager.go:525] Skipping "service"
I1202 23:40:03.305206   54469 controllermanager.go:533] Started "pv-protection"
W1202 23:40:03.305253   54469 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I1202 23:40:03.305396   54469 pv_protection_controller.go:81] Starting PV protection controller
I1202 23:40:03.305406   54469 shared_informer.go:197] Waiting for caches to sync for PV protection
I1202 23:40:03.305804   54469 controllermanager.go:533] Started "statefulset"
... skipping 83 lines ...
}I1202 23:40:03.685932   54469 shared_informer.go:204] Caches are synced for expand 
I1202 23:40:03.700231   54469 shared_informer.go:204] Caches are synced for job 
I1202 23:40:03.700288   54469 shared_informer.go:204] Caches are synced for endpoint 
I1202 23:40:03.701561   54469 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I1202 23:40:03.705604   54469 shared_informer.go:204] Caches are synced for PV protection 
I1202 23:40:03.707167   54469 shared_informer.go:204] Caches are synced for stateful set 
E1202 23:40:03.711137   54469 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E1202 23:40:03.714582   54469 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1202 23:40:03.719713   54469 shared_informer.go:204] Caches are synced for namespace 
I1202 23:40:03.723018   54469 shared_informer.go:204] Caches are synced for PVC protection 
E1202 23:40:03.723731   54469 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
+++ [1202 23:40:03] Testing kubectl version: check client only output matches expected output
I1202 23:40:03.979499   54469 shared_informer.go:204] Caches are synced for service account 
I1202 23:40:03.983262   50985 controller.go:606] quota admission added evaluator for: serviceaccounts
I1202 23:40:04.009818   54469 shared_informer.go:204] Caches are synced for ReplicationController 
Successful: the flag '--client' shows correct client info
(BSuccessful: the flag '--client' correctly has no server version info
(B+++ [1202 23:40:04] Testing kubectl version: verify json output
I1202 23:40:04.179469   54469 shared_informer.go:204] Caches are synced for disruption 
I1202 23:40:04.179500   54469 disruption.go:338] Sending events to api server.
I1202 23:40:04.198497   54469 shared_informer.go:204] Caches are synced for ReplicaSet 
I1202 23:40:04.199602   54469 shared_informer.go:204] Caches are synced for deployment 
I1202 23:40:04.220395   54469 shared_informer.go:204] Caches are synced for HPA 
W1202 23:40:04.232817   54469 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
Successful: --output json has correct client info
(BSuccessful: --output json has correct server info
(BI1202 23:40:04.277926   54469 shared_informer.go:204] Caches are synced for resource quota 
+++ [1202 23:40:04] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I1202 23:40:04.285113   54469 shared_informer.go:204] Caches are synced for persistent volume 
I1202 23:40:04.298161   54469 shared_informer.go:204] Caches are synced for garbage collector 
... skipping 63 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [1202 23:40:07] Creating namespace namespace-1575330007-23314
namespace/namespace-1575330007-23314 created
Context "test" modified.
+++ [1202 23:40:07] Testing RESTMapper
+++ [1202 23:40:08] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 601 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: setting 'all' parameter but found a non empty selector. 
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
(Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
(Bpoddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
(Bpoddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
(Bpoddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(Berror: min-available and max-unavailable cannot be both specified
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 188 lines ...
(Bpod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
(Bpod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
(Bpod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(B+++ [1202 23:40:52] "kubectl patch with resourceVersion 530" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
(BSuccessful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
W1202 23:40:53.047299   54469 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test created
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
(Bnode "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 23 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Berror: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1202 23:41:04] Creating namespace namespace-1575330064-16519
namespace/namespace-1575330064-16519 created
Context "test" modified.
+++ [1202 23:41:04] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [1202 23:41:05] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I1202 23:41:08.349822   50985 client.go:361] parsed scheme: "endpoint"
I1202 23:41:08.349897   50985 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1202 23:41:08.355121   50985 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 102 lines ...
Context "test" modified.
+++ [1202 23:41:11] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 30 lines ...
I1202 23:41:15.150160   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-8484dd655", UID:"abb0cae9-e991-4ee7-887c-8991f09376b2", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-dvv9z
I1202 23:41:15.152958   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-8484dd655", UID:"abb0cae9-e991-4ee7-887c-8991f09376b2", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-tf25k
I1202 23:41:15.155847   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-8484dd655", UID:"abb0cae9-e991-4ee7-887c-8991f09376b2", APIVersion:"apps/v1", ResourceVersion:"628", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-5mhd4
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
(BI1202 23:41:18.935137   54469 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1575330061-23731
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1575330072-15097\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1575330072-15097"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I1202 23:41:24.802242   54469 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575330072-15097", Name:"nginx", UID:"db458d50-2653-4895-ab97-fdc758b835a9", APIVersion:"apps/v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I1202 23:41:24.806124   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-668b6c7744", UID:"3989bf24-5aaa-4141-9b3e-16c53557023d", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-6mm5s
I1202 23:41:24.815391   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-668b6c7744", UID:"3989bf24-5aaa-4141-9b3e-16c53557023d", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-xh7ct
I1202 23:41:24.816304   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330072-15097", Name:"nginx-668b6c7744", UID:"3989bf24-5aaa-4141-9b3e-16c53557023d", APIVersion:"apps/v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-r49fz
Successful
... skipping 141 lines ...
+++ [1202 23:41:32] Creating namespace namespace-1575330092-8170
namespace/namespace-1575330092-8170 created
Context "test" modified.
+++ [1202 23:41:32] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1575330092-8170 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1575330092-8170 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I1202 23:41:35.143678   64921 loader.go:375] Config loaded from file:  /tmp/tmp.uX29iY6hT4/.kube/config
I1202 23:41:35.145286   64921 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1202 23:41:35.183261   64921 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
I1202 23:41:35.185486   64921 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 479 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [1202 23:41:42] Creating namespace namespace-1575330102-26710
namespace/namespace-1575330102-26710 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-12-02T23:41:42Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1575330102-26710", "resourceVersion":"753", "selfLink":"/api/v1/namespaces/namespace-1575330102-26710/pods/valid-pod", "uid":"71088e96-31ef-4459-ac30-e42809c4e49c"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-12-02T23:41:42Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1575330102-26710","resourceVersion":"753","selfLink":"/api/v1/namespaces/namespace-1575330102-26710/pods/valid-pod","uid":"71088e96-31ef-4459-ac30-e42809c4e49c"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-12-02T23:41:42Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1575330102-26710 resourceVersion:753 selfLink:/api/v1/namespaces/namespace-1575330102-26710/pods/valid-pod uid:71088e96-31ef-4459-ac30-e42809c4e49c] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [1202 23:41:48] Creating namespace namespace-1575330108-13730
namespace/namespace-1575330108-13730 created
Context "test" modified.
+++ [1202 23:41:49] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [1202 23:41:49] Creating namespace namespace-1575330109-14032
namespace/namespace-1575330109-14032 created
Context "test" modified.
+++ [1202 23:41:49] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I1202 23:41:50.706745   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330109-14032", Name:"frontend", UID:"2a01975d-23dc-4da9-ac0b-577e913b304f", APIVersion:"apps/v1", ResourceVersion:"811", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mt7xq
I1202 23:41:50.710150   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330109-14032", Name:"frontend", UID:"2a01975d-23dc-4da9-ac0b-577e913b304f", APIVersion:"apps/v1", ResourceVersion:"811", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8cr7d
I1202 23:41:50.712128   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330109-14032", Name:"frontend", UID:"2a01975d-23dc-4da9-ac0b-577e913b304f", APIVersion:"apps/v1", ResourceVersion:"811", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rjzzj
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-8cr7d does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-8cr7d does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"535de60d-8e2c-4538-a4bb-9292bbed343a","resourceVersion":"833","creationTimestamp":"2019-12-02T23:41:52Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"535de60d-8e2c-4538-a4bb-9292bbed343a","resourceVersion":"834","creationTimestamp":"2019-12-02T23:41:52Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"535de60d-8e2c-4538-a4bb-9292bbed343a"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
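The partially elided case above ends with three checks on kubectl's client-side timeout handling: a plain get of valid-pod succeeds, a watch is cut off with "Client.Timeout exceeded while reading body", and a malformed timeout value is rejected. Assuming the global --request-timeout flag is what drives these checks (the actual flag values sit in the elided lines), a sketch looks like:

    # Hedged sketch; flag values are assumptions.
    kubectl get pods valid-pod --request-timeout=1          # completes within the timeout
    kubectl get pods valid-pod --request-timeout=1 --watch  # watch stream is cancelled by the client timeout
    kubectl get pods valid-pod --request-timeout=invalid    # rejected: "Invalid timeout value ..."
    kubectl delete pods valid-pod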
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [1202 23:42:04] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 197 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
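The run_crd_tests output above shows the custom-resource patching path: foos/test is patched to value1, then value2, then back to <no value>, and a local strategic-merge patch is rejected because strategic merge is not implemented for CustomResources. Reconstructed from the recorded change-cause and the crd.sh assertions (the file name in the --local step is hypothetical), a sketch of the flow:

    # Hedged sketch; reconstructed from the assertions, not copied from crd.sh.
    kubectl patch foos/test --type=merge -p '{"patched":"value1"}'
    kubectl patch foos/test --type=merge -p '{"patched":"value2"}'
    kubectl patch foos/test --type=merge -p '{"patched":null}' --record=true
    # Strategic merge (the default patch type) is not supported for CRs, so this fails with
    # "cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge":
    kubectl patch --local -f foo-instance.yaml -p '{"patched":"value3"}' -o yaml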
Recording: run_cmd_with_img_tests
... skipping 11 lines ...
I1202 23:42:20.205054   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330139-28794", Name:"test1-6cdffdb5b8", UID:"bf6395ef-5fd6-4a00-988a-62150cf9de2a", APIVersion:"apps/v1", ResourceVersion:"997", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-br74p
Successful
message:deployment.apps/test1 created
has:deployment.apps/test1 created
deployment.apps "test1" deleted
W1202 23:42:20.363927   50985 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1202 23:42:20.366095   54469 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
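run_cmd_with_img_tests above checks image-reference handling: a deployment named test1 is created from a well-formed image reference and then deleted, while the literal string "InvalidImageName" is rejected with a client-side "error:" rather than an "Error from server". The image names and the exact run/create variant are not visible in this log, so the following is only a sketch:

    # Hedged sketch; image names and the deployment-producing flags are assumptions.
    kubectl run test1 --image=validname          # with the deployment-producing generator used by the harness (not shown here)
    kubectl delete deployment test1
    kubectl run test2 --image=InvalidImageName   # rejected: Invalid image name "InvalidImageName": invalid reference format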
W1202 23:42:20.476246   50985 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1202 23:42:20.485252   54469 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [1202 23:42:20] Testing recursive resources
+++ [1202 23:42:20] Creating namespace namespace-1575330140-15817
W1202 23:42:20.603773   50985 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1202 23:42:20.605446   54469 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575330140-15817 created
Context "test" modified.
W1202 23:42:20.756587   50985 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1202 23:42:20.758094   54469 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
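This is the start of the recursive-resources section: a directory of pod manifests is created with -f <dir> plus the recursive flag, and one of the files, busybox-broken.yaml, intentionally misspells its kind field, which is why "kind not set" and "Object 'Kind' is missing" recur through the rest of this section. A sketch of the create step, with flags assumed from the error text rather than copied from generic-resources.sh:

    # Hedged sketch of the recursive create asserted above.
    kubectl create -f hack/testdata/recursive/pod --recursive
    # busybox0 and busybox1 are created; busybox-broken.yaml fails because its "kind" key is
    # deliberately broken ("ind"), and even --validate=false would still hit the decode error.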
E1202 23:42:21.367718   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:21.486893   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:21.607079   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1202 23:42:21.759929   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:22.369158   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1575330140-15817
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 153 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1202 23:42:22.488266   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:22.608891   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:22.761240   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E1202 23:42:23.370874   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:23.490106   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I1202 23:42:23.573794   54469 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575330140-15817", Name:"nginx", UID:"6c0903ce-4450-4f51-9b5b-48ecfed1b37c", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I1202 23:42:23.578538   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx-f87d999f7", UID:"679a231d-c08a-4f40-9ccf-edec4d1b2ab3", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-pr78w
I1202 23:42:23.583468   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx-f87d999f7", UID:"679a231d-c08a-4f40-9ccf-edec4d1b2ab3", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-rgbjg
I1202 23:42:23.584655   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx-f87d999f7", UID:"679a231d-c08a-4f40-9ccf-edec4d1b2ab3", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-xhv2z
E1202 23:42:23.609689   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E1202 23:42:23.762564   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I1202 23:42:24.042433   54469 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
... skipping 38 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:24.372339   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:24.491506   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1202 23:42:24.611059   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1202 23:42:24.763826   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E1202 23:42:25.373961   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1202 23:42:25.492937   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:25.612392   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:25.765157   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I1202 23:42:26.143268   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox0", UID:"55e1b6d4-6cca-4448-a2ae-8dc66870ee3e", APIVersion:"v1", ResourceVersion:"1057", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rfmgh
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1202 23:42:26.148171   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox1", UID:"7c708efe-50aa-45ce-8e5c-af7c97ca67c1", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tx6hh
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:26.375168   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:26.494502   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
E1202 23:42:26.613742   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:26.766486   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
(Bgeneric-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:27.376533   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
E1202 23:42:27.496119   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
E1202 23:42:27.615560   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E1202 23:42:27.768135   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I1202 23:42:28.344762   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox0", UID:"55e1b6d4-6cca-4448-a2ae-8dc66870ee3e", APIVersion:"v1", ResourceVersion:"1079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m6kg6
I1202 23:42:28.353590   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox1", UID:"7c708efe-50aa-45ce-8e5c-af7c97ca67c1", APIVersion:"v1", ResourceVersion:"1083", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qlw9h
E1202 23:42:28.377912   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
E1202 23:42:28.497876   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
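The scale step above takes both well-formed ReplicationControllers from 1 to 2 replicas while the broken manifest again fails to decode. A sketch, with the flag combination assumed from the recursive pattern used throughout this section:

    # Hedged sketch; exact flags are assumptions.
    kubectl scale --replicas=2 -f hack/testdata/recursive/rc --recursive
    # busybox0 and busybox1 are scaled; busybox-broken.yaml still fails with "Object 'Kind' is missing".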
E1202 23:42:28.617165   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1202 23:42:28.769549   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I1202 23:42:29.253790   54469 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575330140-15817", Name:"nginx1-deployment", UID:"c5b32cd7-5f3e-42b9-b237-6d8971d923cb", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1202 23:42:29.262385   54469 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575330140-15817", Name:"nginx0-deployment", UID:"9bb9a053-eda6-4860-b711-abef757502cb", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I1202 23:42:29.262580   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx1-deployment-7bdbbfb5cf", UID:"74e8d4cc-54d7-408d-aa30-bf91f1683def", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-gbhcb
I1202 23:42:29.266266   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx0-deployment-57c6bff7f6", UID:"9fb70bf2-1870-4a8f-a7b6-2a74624d9921", APIVersion:"apps/v1", ResourceVersion:"1105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-2fsl8
I1202 23:42:29.269186   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx1-deployment-7bdbbfb5cf", UID:"74e8d4cc-54d7-408d-aa30-bf91f1683def", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-z4tgq
I1202 23:42:29.271205   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575330140-15817", Name:"nginx0-deployment-57c6bff7f6", UID:"9fb70bf2-1870-4a8f-a7b6-2a74624d9921", APIVersion:"apps/v1", ResourceVersion:"1105", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-wwdm4
E1202 23:42:29.379445   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
E1202 23:42:29.499221   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
E1202 23:42:29.618268   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
E1202 23:42:29.770803   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E1202 23:42:30.381020   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
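The rollout checks above run history, pause, resume, and history again against the deployment directory; the two valid deployments are handled and the broken manifest fails to decode each time. A sketch, assuming the rollout subcommands are driven with the same -f/--recursive pattern as the rest of this section:

    # Hedged sketch; flags are assumptions.
    kubectl rollout history -f hack/testdata/recursive/deployment --recursive
    kubectl rollout pause   -f hack/testdata/recursive/deployment --recursive
    kubectl rollout resume  -f hack/testdata/recursive/deployment --recursive
    # nginx1-deployment and nginx0-deployment are handled; nginx-broken.yaml fails to decode as before.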
E1202 23:42:30.500723   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E1202 23:42:30.619699   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:30.772594   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:31.382669   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:31.502420   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:31.620758   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E1202 23:42:31.774056   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
I1202 23:42:31.924648   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox0", UID:"222faf0e-9df6-4cab-ae92-6f1ab6912430", APIVersion:"v1", ResourceVersion:"1151", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rnmrc
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1202 23:42:31.932139   54469 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575330140-15817", Name:"busybox1", UID:"5f63d664-53fc-4b3e-990f-eaf49c1ddc9b", APIVersion:"v1", ResourceVersion:"1153", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lpjgg
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
E1202 23:42:32.383786   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
E1202 23:42:32.503603   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:32.622352   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:32.775395   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:33.385351   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests
E1202 23:42:33.505181   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [1202 23:42:33] Testing kubectl(v1:namespaces)
E1202 23:42:33.623390   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
core.sh:1314: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
E1202 23:42:33.777145   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "my-namespace" deleted
E1202 23:42:34.387101   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:34.506764   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:34.624687   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:34.778532   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:35.388651   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:35.508219   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:35.626017   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:35.779974   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:36.390087   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:36.509567   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:36.602061   54469 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1202 23:42:36.602117   54469 shared_informer.go:204] Caches are synced for garbage collector 
E1202 23:42:36.627494   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:36.686372   54469 shared_informer.go:197] Waiting for caches to sync for resource quota
I1202 23:42:36.686804   54469 shared_informer.go:204] Caches are synced for resource quota 
E1202 23:42:36.781228   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:37.391409   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:37.510793   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:37.628981   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:37.782633   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:38.392738   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:38.512416   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:38.630531   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:38.784270   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1323: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
E1202 23:42:39.394127   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1575330004-13763" deleted
namespace "namespace-1575330007-23314" deleted
... skipping 26 lines ...
namespace "namespace-1575330113-26253" deleted
namespace "namespace-1575330114-21143" deleted
namespace "namespace-1575330117-6599" deleted
namespace "namespace-1575330118-4038" deleted
namespace "namespace-1575330139-28794" deleted
namespace "namespace-1575330140-15817" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1575330004-13763" deleted
... skipping 27 lines ...
namespace "namespace-1575330113-26253" deleted
namespace "namespace-1575330114-21143" deleted
namespace "namespace-1575330117-6599" deleted
namespace "namespace-1575330118-4038" deleted
namespace "namespace-1575330139-28794" deleted
namespace "namespace-1575330140-15817" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
E1202 23:42:39.514238   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1335: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
E1202 23:42:39.631739   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/other created
E1202 23:42:39.786165   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1339: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1343: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1347: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E1202 23:42:40.396428   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1349: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E1202 23:42:40.515542   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
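The namespace-scoping checks above create namespace other, place valid-pod in it, read it back with both the long and short namespace flags, and confirm that fetching a named resource across all namespaces is rejected. Reconstructed from the assertions (not from core.sh):

    # Hedged sketch of the namespace-scoping checks; the pod creation step itself is elided.
    kubectl create namespace other
    kubectl get pods --namespace=other              # lists valid-pod
    kubectl get pods -n other                       # same lookup with the short flag
    kubectl get pods valid-pod --all-namespaces     # rejected: a resource cannot be retrieved by name across all namespaces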
E1202 23:42:40.633002   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1356: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E1202 23:42:40.788244   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1360: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
E1202 23:42:41.400720   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:41.517113   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:41.634251   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:41.713497   54469 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1575330140-15817
I1202 23:42:41.718343   54469 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1575330140-15817
E1202 23:42:41.789954   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:42.402015   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:42.518497   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:42.635847   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:42.791323   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:43.403546   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:43.520050   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:43.637650   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:43.792459   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:44.404343   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:44.521646   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:44.639079   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:44.793720   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:45.406087   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:45.523369   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:45.640485   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:45.795413   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
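The namespace checks above (core.sh:1335-1360) exercise namespace-scoped gets and the rule that a resource cannot be fetched by name across all namespaces. A minimal sketch of comparable stock-kubectl invocations, not the actual core.sh helpers (the pod manifest path is a placeholder):
  kubectl create namespace other
  kubectl create -f valid-pod.yaml --namespace=other                     # manifest path assumed
  kubectl get pods --namespace=other -o go-template='{{range.items}}{{.metadata.name}}:{{end}}'
  kubectl get pods valid-pod --all-namespaces                            # errors: a resource cannot be retrieved by name across all namespaces
  kubectl delete pod valid-pod --namespace=other --force --grace-period=0
  kubectl delete namespace other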
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
+++ [1202 23:42:46] Creating namespace namespace-1575330166-6837
namespace/namespace-1575330166-6837 created
E1202 23:42:46.407384   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [1202 23:42:46] Testing secrets
E1202 23:42:46.524957   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:46.528630   71347 loader.go:375] Config loaded from file:  /tmp/tmp.uX29iY6hT4/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
... skipping 25 lines ...
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
E1202 23:42:46.642114   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
E1202 23:42:46.797218   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
E1202 23:42:47.408552   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E1202 23:42:47.526278   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E1202 23:42:47.643476   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E1202 23:42:47.798622   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
E1202 23:42:48.410109   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E1202 23:42:48.527839   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E1202 23:42:48.644912   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
secret/test-secret created
E1202 23:42:48.800124   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I1202 23:42:49.048969   54469 namespace_controller.go:185] Namespace has been deleted my-namespace
secret "test-secret" deleted
secret/secret-string-data created
core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
E1202 23:42:49.411522   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
E1202 23:42:49.529470   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:49.589328   54469 namespace_controller.go:185] Namespace has been deleted kube-node-lease
I1202 23:42:49.590352   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330007-23314
I1202 23:42:49.595541   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330024-14855
I1202 23:42:49.597827   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330030-8360
core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I1202 23:42:49.605997   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330031-10740
I1202 23:42:49.633097   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330013-23323
I1202 23:42:49.636917   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330029-28545
I1202 23:42:49.643067   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330004-13763
I1202 23:42:49.645210   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330020-29723
E1202 23:42:49.646177   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:49.651267   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330024-29611
secret "secret-string-data" deleted
E1202 23:42:49.801363   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I1202 23:42:49.824419   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330056-10074
I1202 23:42:49.830095   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330057-5501
I1202 23:42:49.839055   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330041-28221
I1202 23:42:49.852436   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330064-16519
I1202 23:42:49.855409   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330042-9316
... skipping 16 lines ...
I1202 23:42:50.198484   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330109-14032
I1202 23:42:50.276566   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330117-6599
I1202 23:42:50.278540   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330114-21143
I1202 23:42:50.296434   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330118-4038
I1202 23:42:50.303663   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330139-28794
I1202 23:42:50.357740   54469 namespace_controller.go:185] Namespace has been deleted namespace-1575330140-15817
E1202 23:42:50.413350   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:50.530658   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:50.657188   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:50.802797   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:42:51.139779   54469 namespace_controller.go:185] Namespace has been deleted other
E1202 23:42:51.414816   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:51.532152   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:51.658514   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:51.804452   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:52.416514   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:52.533551   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:52.660130   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:52.805596   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:53.418385   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:53.534825   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:53.661773   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:53.807015   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:54.419660   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:54.536328   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:54.663367   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:54.808525   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
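The secrets run (core.sh:725-807) creates secrets of several types and reads back .type and .data through go-template output. A rough sketch of comparable commands, assuming stock kubectl flags; file paths and literal values here are placeholders, not the actual test inputs:
  kubectl create namespace test-secrets
  kubectl create secret docker-registry test-secret --namespace=test-secrets \
      --docker-username=u --docker-password=p --docker-email=u@example.invalid
  kubectl get secret/test-secret --namespace=test-secrets -o go-template='{{.type}}'   # kubernetes.io/dockerconfigjson
  kubectl delete secret test-secret --namespace=test-secrets
  kubectl create secret tls test-secret --namespace=test-secrets --cert=tls.crt --key=tls.key   # type kubernetes.io/tls
  # secret-string-data is consistent with applying a manifest whose stringData is {k1: v1, k2: v2}:
  # the API server folds stringData into .data as base64 (djE=/djI=), so {{.stringData}} reads back as <no value>.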
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [1202 23:42:55] Creating namespace namespace-1575330175-15819
E1202 23:42:55.421103   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575330175-15819 created
Context "test" modified.
E1202 23:42:55.537706   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [1202 23:42:55] Testing configmaps
E1202 23:42:55.664776   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-configmap created
E1202 23:42:55.809719   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
(Bconfigmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
E1202 23:42:56.422419   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
E1202 23:42:56.539411   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
E1202 23:42:56.666592   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-configmap created
configmap/test-binary-configmap created
E1202 23:42:56.811236   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
configmap "test-configmap" deleted
E1202 23:42:57.423740   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-binary-configmap" deleted
E1202 23:42:57.540811   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-configmaps" deleted
E1202 23:42:57.667826   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:57.812516   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:58.425324   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:58.542257   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:58.669437   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:58.813988   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:59.426741   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:59.543594   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:59.670802   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:42:59.814827   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:00.249643   54469 namespace_controller.go:185] Namespace has been deleted test-secrets
E1202 23:43:00.428256   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:00.545525   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:00.672650   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:00.816281   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:01.430100   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:01.547207   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:01.673958   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:01.817440   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:02.432145   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:02.549400   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:02.675068   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
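The configmap checks above (core.sh:28-49) only need name lookups in a dedicated namespace. A plausible equivalent with stock kubectl; the literal and file inputs are assumptions, since the test only asserts on names:
  kubectl create configmap test-configmap --from-literal=key=value
  kubectl create namespace test-configmaps
  kubectl create configmap test-configmap --namespace=test-configmaps --from-literal=key=value
  kubectl create configmap test-binary-configmap --namespace=test-configmaps --from-file=bin=./some-binary-file
  kubectl get configmap/test-configmap --namespace=test-configmaps -o go-template='{{.metadata.name}}'
  kubectl delete configmap test-configmap test-binary-configmap --namespace=test-configmaps
  kubectl delete namespace test-configmaps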
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [1202 23:43:02] Creating namespace namespace-1575330182-19498
E1202 23:43:02.818965   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575330182-19498 created
Context "test" modified.
+++ [1202 23:43:02] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
E1202 23:43:03.433574   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
E1202 23:43:03.551115   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
E1202 23:43:03.676363   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
E1202 23:43:03.820079   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
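Each "Successful ... has:" pair above probes a distinct client-config failure path. A sketch of invocations that would produce these errors with stock kubectl, assuming each flag points at a nonexistent kubeconfig entry (the names come from the messages; everything else is assumed):
  kubectl get pods --kubeconfig=missing               # stat missing: no such file or directory
  kubectl get pods --context=missing-context          # context was not found for specified context
  kubectl get pods --cluster=missing-cluster          # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user                # auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml   # config with apiVersion v-1: no kind "Config" is registered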
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 2 lines ...
+++ [1202 23:43:03] Creating namespace namespace-1575330183-30407
namespace/namespace-1575330183-30407 created
Context "test" modified.
+++ [1202 23:43:04] Testing service accounts
core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
E1202 23:43:04.435017   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
E1202 23:43:04.552382   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
serviceaccount/test-service-account created
core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
E1202 23:43:04.678258   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
serviceaccount "test-service-account" deleted
E1202 23:43:04.821442   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-service-accounts" deleted
E1202 23:43:05.436569   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:05.554035   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:05.679762   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:05.823754   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:06.438426   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:06.555528   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:06.681340   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:06.825190   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:07.439728   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:07.556797   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:07.669992   54469 namespace_controller.go:185] Namespace has been deleted test-configmaps
E1202 23:43:07.682837   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:07.826650   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:08.441114   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:08.558134   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:08.684394   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:08.828450   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:09.442407   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:09.559397   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:09.686329   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:09.829828   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
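The service-account case (core.sh:828-838) is a create/get/delete round trip in its own namespace. Comparable stock-kubectl commands, assuming nothing beyond the names shown above:
  kubectl create namespace test-service-accounts
  kubectl create serviceaccount test-service-account --namespace=test-service-accounts
  kubectl get serviceaccount/test-service-account --namespace=test-service-accounts -o go-template='{{.metadata.name}}'
  kubectl delete serviceaccount test-service-account --namespace=test-service-accounts
  kubectl delete namespace test-service-accounts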
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [1202 23:43:10] Creating namespace namespace-1575330190-10987
namespace/namespace-1575330190-10987 created
Context "test" modified.
+++ [1202 23:43:10] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
E1202 23:43:10.443805   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/test-jobs created
E1202 23:43:10.561338   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
E1202 23:43:10.688352   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/pi created
E1202 23:43:10.831242   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0        <none>          1s
Name:                          pi
Namespace:                     test-jobs
Labels:                        run=pi
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  run=pi
... skipping 17 lines ...
Active Jobs:         <none>
Events:              <none>
Successful
message:job.batch/test-job
has:job.batch/test-job
batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
E1202 23:43:11.445168   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:11.493056   54469 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"0bd7b016-f85f-4f84-8082-1ed5870508d0", APIVersion:"batch/v1", ResourceVersion:"1493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-vgdcf
job.batch/test-job created
E1202 23:43:11.563183   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
E1202 23:43:11.690053   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME       COMPLETIONS   DURATION   AGE
test-job   0/1           0s         0s
Name:           test-job
Namespace:      test-jobs
Selector:       controller-uid=0bd7b016-f85f-4f84-8082-1ed5870508d0
Labels:         controller-uid=0bd7b016-f85f-4f84-8082-1ed5870508d0
                job-name=test-job
                run=pi
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Mon, 02 Dec 2019 23:43:11 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=0bd7b016-f85f-4f84-8082-1ed5870508d0
           job-name=test-job
           run=pi
  Containers:
   pi:
... skipping 12 lines ...
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-vgdcf
E1202 23:43:11.833670   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
job.batch "test-job" deleted
cronjob.batch "pi" deleted
namespace "test-jobs" deleted
E1202 23:43:12.446697   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:12.564895   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:12.691340   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:12.835415   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:13.448282   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:13.566482   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:13.693089   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:13.837300   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:14.449982   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:14.568416   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:14.694827   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:14.838816   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:14.971797   54469 namespace_controller.go:185] Namespace has been deleted test-service-accounts
E1202 23:43:15.451673   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:15.569667   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:15.696706   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:15.840303   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:16.453558   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:16.571467   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:16.698246   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 23:43:16.842183   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
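The job case (batch.sh:30-53) creates a CronJob through the deprecated run generator (the deprecation warning and the 59 23 31 2 * schedule appear above) and then instantiates a Job from it, which is what produces the cronjob.kubernetes.io/instantiate: manual annotation and Controlled By: CronJob/pi. A sketch; the image and container command are assumptions, since the describe output elides them:
  kubectl create namespace test-jobs
  kubectl run pi --generator=cronjob/v1beta1 --schedule="59 23 31 2 *" \
      --image=<perl-image> --namespace=test-jobs -- <pi command>        # image and command assumed
  kubectl describe cronjob pi --namespace=test-jobs
  kubectl create job test-job --from=cronjob/pi --namespace=test-jobs
  kubectl describe job test-job --namespace=test-jobs
  kubectl delete job test-job --namespace=test-jobs && kubectl delete cronjob pi --namespace=test-jobs
  kubectl delete namespace test-jobs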
Recording: run_create_job_tests
Running command: run_create_job_tests

+++ Running case: test-cmd.run_create_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_job_tests
+++ [1202 23:43:17] Creating namespace namespace-1575330197-12447
namespace/namespace-1575330197-12447 created
E1202 23:43:17.455021   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
E1202 23:43:17.572831   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:17.620615   54469 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575330197-12447", Name:"test-job", UID:"f775b25b-c796-451f-8823-86549337a37f", APIVersion:"batch/v1", ResourceVersion:"1515", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-6zdth
job.batch/test-job created
E1202 23:43:17.699993   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
(Bjob.batch "test-job" deleted
E1202 23:43:17.843440   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1202 23:43:17.934711   54469 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575330197-12447", Name:"test-job-pi", UID:"17cd1f4b-ca58-4ffd-aca4-7e99fd5bd5c0", APIVersion:"batch/v1", ResourceVersion:"1522", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-j95qr
job.batch/test-job-pi created
create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
(Bjob.batch "test-job-pi" deleted
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/test-pi created
I1202 23:43:18.372100   54469 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575330197-12447", Name:"my-pi", UID:"9c92bd52-4167-40f8-9692-6edc8b08697e", APIVersion:"batch/v1", ResourceVersion:"1532", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-j6gkm
job.batch/my-pi created
E1202 23:43:18.456516   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
E1202 23:43:18.573826   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch "test-pi" deleted
+++ exit code: 0
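The create-job case exercises the three creation paths of kubectl create job: image only, image plus command, and --from=cronjob (test-pi being the cronjob created just above via the deprecated run generator). The images and the perl command come straight from the assertions; the final template read-back is an assumed illustration in the same style as create.sh:86:
  kubectl create job test-job --image=k8s.gcr.io/nginx:test-cmd
  kubectl create job test-job-pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
  kubectl create job my-pi --from=cronjob/test-pi
  kubectl get job my-pi -o go-template='{{(index .spec.template.spec.containers 0).command}}'   # [perl -Mbignum=bpi -wle print bpi(10)]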
E1202 23:43:18.701480   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_pod_templates_tests
Running command: run_pod_templates_tests

+++ Running case: test-cmd.run_pod_templates_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
+++ [1202 23:43:18] Creating namespace namespace-1575330198-13031
E1202 23:43:18.844698   54469 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575330198-13031 created
Context "test" modified.
+++ [1202 23:43:18] Testing pod templates
core.sh:1421: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1202 23:43:19.286093   50985 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
core.sh:1425: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E1202 23:43:19.458185   54469 reflector.go:156] k8s.io/client-go/metadata