Result: FAILURE
Tests: 1 failed / 2862 succeeded
Started: 2019-09-12 16:29
Elapsed: 27m35s
Builder: gke-prow-ssd-pool-1a225945-bfs0
Refs: master:b3c4bdea, 81703:9b34fb0b, 82600:84070403, 82602:75888077
pod: 68c47fb1-d57a-11e9-ad08-968d9a0b984c
resultstore: https://source.cloud.google.com/results/invocations/462e3972-0a9b-4434-8828-ef22b884a2e6/targets/test
infra-commit: 4707880c9
repo: k8s.io/kubernetes
repo-commit: 0e6a8d710b4568b238b5b14d5d5efd58472a74fe
repos: k8s.io/kubernetes: master:b3c4bdea496c0e808ad761d6c387fcd6838dea99, 81703:9b34fb0b627196a9d6b6d15025ff6dbd27c34365, 82600:84070403dad60237c7798978e5bcf8b8329ed790, 82602:75888077d34b1312d7a9547565f2e9d16819b52b

Test Failures


k8s.io/kubernetes/test/integration/scheduler TestNodePIDPressure 33s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$
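(To reproduce this result locally, the pull request refs from the job metadata would first have to be merged into master at the listed commit, and an etcd instance must be reachable on 127.0.0.1:2379, as the captured log below shows. The refspecs and merge steps in this sketch are illustrative assumptions, not taken from the job itself; only the final go test invocation comes from the job page.)

# Check out master at the tested commit (assumes a local clone of k8s.io/kubernetes with GitHub as "origin").
git checkout b3c4bdea496c0e808ad761d6c387fcd6838dea99
# Fetch the three PR heads listed above, then merge the tested SHAs.
git fetch origin pull/81703/head pull/82600/head pull/82602/head
git merge 9b34fb0b627196a9d6b6d15025ff6dbd27c34365 84070403dad60237c7798978e5bcf8b8329ed790 75888077d34b1312d7a9547565f2e9d16819b52b
# Rerun only the failing test (same invocation as above); requires etcd listening on 127.0.0.1:2379.
go test -v k8s.io/kubernetes/test/integration/scheduler -run TestNodePIDPressure$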
=== RUN   TestNodePIDPressure
W0912 16:52:48.225138  108901 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I0912 16:52:48.225155  108901 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I0912 16:52:48.225171  108901 master.go:303] Node port range unspecified. Defaulting to 30000-32767.
I0912 16:52:48.225179  108901 master.go:259] Using reconciler: 
I0912 16:52:48.226851  108901 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.227046  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.227070  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.228010  108901 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I0912 16:52:48.228045  108901 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.228481  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.228509  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.228639  108901 reflector.go:158] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I0912 16:52:48.230045  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.230350  108901 store.go:1342] Monitoring events count at <storage-prefix>//events
I0912 16:52:48.230383  108901 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.230425  108901 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0912 16:52:48.231300  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.232000  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.232108  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.233153  108901 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I0912 16:52:48.233185  108901 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.233309  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.233325  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.233406  108901 reflector.go:158] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I0912 16:52:48.234070  108901 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I0912 16:52:48.234117  108901 reflector.go:158] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I0912 16:52:48.234288  108901 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.234406  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.234426  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.234700  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.234905  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.235017  108901 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I0912 16:52:48.235237  108901 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.235298  108901 reflector.go:158] Listing and watching *core.Secret from storage/cacher.go:/secrets
I0912 16:52:48.235391  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.235413  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.236364  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.236388  108901 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I0912 16:52:48.236428  108901 reflector.go:158] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I0912 16:52:48.236557  108901 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.236712  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.236731  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.237970  108901 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I0912 16:52:48.238093  108901 reflector.go:158] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I0912 16:52:48.238172  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.238214  108901 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.238350  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.238367  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.239299  108901 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I0912 16:52:48.239316  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.239341  108901 reflector.go:158] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I0912 16:52:48.239506  108901 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.239644  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.239664  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.240371  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.249560  108901 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I0912 16:52:48.249770  108901 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.250017  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.250025  108901 reflector.go:158] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I0912 16:52:48.250051  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.251986  108901 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I0912 16:52:48.251771  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.252035  108901 reflector.go:158] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I0912 16:52:48.252301  108901 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.252453  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.252707  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.254070  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.254738  108901 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I0912 16:52:48.254782  108901 reflector.go:158] Listing and watching *core.Node from storage/cacher.go:/minions
I0912 16:52:48.255487  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.255699  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.255826  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.255844  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.256966  108901 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I0912 16:52:48.257192  108901 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.257248  108901 reflector.go:158] Listing and watching *core.Pod from storage/cacher.go:/pods
I0912 16:52:48.257336  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.257359  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.258018  108901 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I0912 16:52:48.258190  108901 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.258341  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.258367  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.258456  108901 reflector.go:158] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I0912 16:52:48.258899  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.259920  108901 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I0912 16:52:48.259972  108901 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.260134  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.260153  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.260259  108901 reflector.go:158] Listing and watching *core.Service from storage/cacher.go:/services/specs
I0912 16:52:48.260897  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.262320  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.262346  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.263283  108901 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.263421  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.263443  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.264065  108901 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I0912 16:52:48.264088  108901 rest.go:115] the default service ipfamily for this cluster is: IPv4
I0912 16:52:48.264547  108901 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.264765  108901 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.265580  108901 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.265921  108901 reflector.go:158] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I0912 16:52:48.266685  108901 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.267982  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.269741  108901 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.270839  108901 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.271569  108901 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.271746  108901 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.272289  108901 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.272724  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.273505  108901 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.274517  108901 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.274864  108901 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.276253  108901 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.276735  108901 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.277505  108901 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.277910  108901 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.278945  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.279280  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.279513  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.279723  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.280008  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.280279  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.280557  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.281798  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.282256  108901 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.283318  108901 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.284544  108901 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.285006  108901 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.285394  108901 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.286324  108901 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.286706  108901 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.287805  108901 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.288676  108901 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.289862  108901 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.290886  108901 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.291312  108901 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.291524  108901 master.go:450] Skipping disabled API group "auditregistration.k8s.io".
I0912 16:52:48.291615  108901 master.go:461] Enabling API group "authentication.k8s.io".
I0912 16:52:48.291700  108901 master.go:461] Enabling API group "authorization.k8s.io".
I0912 16:52:48.291965  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.292236  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.292380  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.296140  108901 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 16:52:48.296369  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.296512  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.296542  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.296653  108901 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 16:52:48.298522  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.298907  108901 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 16:52:48.299104  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.299266  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.299291  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.299400  108901 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 16:52:48.300542  108901 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I0912 16:52:48.300827  108901 master.go:461] Enabling API group "autoscaling".
I0912 16:52:48.300569  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.300658  108901 reflector.go:158] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I0912 16:52:48.302514  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.303367  108901 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.303520  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.303545  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.304542  108901 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I0912 16:52:48.304583  108901 reflector.go:158] Listing and watching *batch.Job from storage/cacher.go:/jobs
I0912 16:52:48.304730  108901 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.304869  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.304890  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.305969  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.306013  108901 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I0912 16:52:48.306034  108901 master.go:461] Enabling API group "batch".
I0912 16:52:48.306410  108901 reflector.go:158] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I0912 16:52:48.306406  108901 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.306548  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.306566  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.307503  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.307804  108901 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I0912 16:52:48.307826  108901 master.go:461] Enabling API group "certificates.k8s.io".
I0912 16:52:48.308012  108901 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.308168  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.308187  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.308218  108901 reflector.go:158] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I0912 16:52:48.309103  108901 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0912 16:52:48.309232  108901 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0912 16:52:48.309288  108901 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.309413  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.309430  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.310103  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.310172  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.310506  108901 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I0912 16:52:48.310521  108901 master.go:461] Enabling API group "coordination.k8s.io".
I0912 16:52:48.310534  108901 master.go:450] Skipping disabled API group "discovery.k8s.io".
I0912 16:52:48.310576  108901 reflector.go:158] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I0912 16:52:48.310706  108901 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.310802  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.310823  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.311544  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.312026  108901 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0912 16:52:48.312049  108901 master.go:461] Enabling API group "extensions".
I0912 16:52:48.312177  108901 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0912 16:52:48.312219  108901 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.312341  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.312356  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.313086  108901 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I0912 16:52:48.313245  108901 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.313378  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.313395  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.313659  108901 reflector.go:158] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I0912 16:52:48.313731  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.314709  108901 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I0912 16:52:48.314732  108901 master.go:461] Enabling API group "networking.k8s.io".
I0912 16:52:48.314758  108901 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.314850  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.314865  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.315005  108901 reflector.go:158] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I0912 16:52:48.315309  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.316105  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.319524  108901 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I0912 16:52:48.319545  108901 master.go:461] Enabling API group "node.k8s.io".
I0912 16:52:48.319608  108901 reflector.go:158] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I0912 16:52:48.319753  108901 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.319863  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.319880  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.320401  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.321604  108901 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I0912 16:52:48.321660  108901 reflector.go:158] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I0912 16:52:48.321847  108901 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.321995  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.322024  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.322859  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.323523  108901 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I0912 16:52:48.323661  108901 master.go:461] Enabling API group "policy".
I0912 16:52:48.323565  108901 reflector.go:158] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I0912 16:52:48.323739  108901 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.323883  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.323900  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.324477  108901 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0912 16:52:48.324690  108901 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.324848  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.324868  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.324958  108901 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0912 16:52:48.325251  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.325818  108901 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0912 16:52:48.325848  108901 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.325983  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.326001  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.326134  108901 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0912 16:52:48.326514  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.327083  108901 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0912 16:52:48.327271  108901 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.327406  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.327430  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.327509  108901 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0912 16:52:48.327905  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.328963  108901 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0912 16:52:48.328989  108901 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0912 16:52:48.329001  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.329010  108901 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.329172  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.329195  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.329839  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.329985  108901 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I0912 16:52:48.330075  108901 reflector.go:158] Listing and watching *rbac.Role from storage/cacher.go:/roles
I0912 16:52:48.330344  108901 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.330447  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.330466  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.330759  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.331106  108901 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I0912 16:52:48.331144  108901 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.331255  108901 reflector.go:158] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I0912 16:52:48.331262  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.331327  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.332005  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.332028  108901 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I0912 16:52:48.332155  108901 reflector.go:158] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I0912 16:52:48.332244  108901 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.332385  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.332400  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.333319  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.333458  108901 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I0912 16:52:48.333497  108901 master.go:461] Enabling API group "rbac.authorization.k8s.io".
I0912 16:52:48.333625  108901 reflector.go:158] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I0912 16:52:48.334617  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.335905  108901 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.336077  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.336115  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.337084  108901 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0912 16:52:48.337158  108901 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0912 16:52:48.337275  108901 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.337420  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.337445  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.338631  108901 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I0912 16:52:48.338658  108901 master.go:461] Enabling API group "scheduling.k8s.io".
I0912 16:52:48.338804  108901 master.go:450] Skipping disabled API group "settings.k8s.io".
I0912 16:52:48.338985  108901 reflector.go:158] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I0912 16:52:48.338985  108901 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.339267  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.339285  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.339808  108901 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0912 16:52:48.340013  108901 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.340130  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.340146  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.340234  108901 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0912 16:52:48.340325  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.341870  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.342195  108901 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0912 16:52:48.342227  108901 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.342343  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.342367  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.342392  108901 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0912 16:52:48.342870  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.343966  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.344829  108901 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I0912 16:52:48.344868  108901 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.345042  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.345048  108901 reflector.go:158] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I0912 16:52:48.345060  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.345833  108901 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I0912 16:52:48.346208  108901 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.346313  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.346342  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.346429  108901 reflector.go:158] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I0912 16:52:48.347831  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.347988  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.348948  108901 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I0912 16:52:48.349176  108901 reflector.go:158] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I0912 16:52:48.350337  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.351389  108901 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.351690  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.351845  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.352854  108901 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I0912 16:52:48.353103  108901 master.go:461] Enabling API group "storage.k8s.io".
I0912 16:52:48.353051  108901 reflector.go:158] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I0912 16:52:48.354836  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.355766  108901 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.356081  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.356209  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.357403  108901 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I0912 16:52:48.357771  108901 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.357899  108901 reflector.go:158] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I0912 16:52:48.359703  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.360325  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.361235  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.362817  108901 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I0912 16:52:48.363299  108901 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.363070  108901 reflector.go:158] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I0912 16:52:48.364910  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.366355  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.366136  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.375717  108901 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I0912 16:52:48.376343  108901 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.378288  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.376095  108901 reflector.go:158] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I0912 16:52:48.378598  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.380822  108901 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I0912 16:52:48.381306  108901 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.381018  108901 reflector.go:158] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I0912 16:52:48.384348  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.385333  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.385503  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.387002  108901 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I0912 16:52:48.387307  108901 master.go:461] Enabling API group "apps".
I0912 16:52:48.389513  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.387197  108901 reflector.go:158] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I0912 16:52:48.391821  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.392521  108901 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.393831  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.393990  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.402135  108901 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0912 16:52:48.402453  108901 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.404079  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.402156  108901 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0912 16:52:48.404330  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.405978  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.407155  108901 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0912 16:52:48.407276  108901 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0912 16:52:48.407345  108901 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.408479  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.409113  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.409241  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.410326  108901 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I0912 16:52:48.410407  108901 reflector.go:158] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I0912 16:52:48.411481  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.412388  108901 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.412665  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.412710  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.413760  108901 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I0912 16:52:48.413840  108901 reflector.go:158] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I0912 16:52:48.414977  108901 master.go:461] Enabling API group "admissionregistration.k8s.io".
I0912 16:52:48.415147  108901 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.415560  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.415730  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:48.415758  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I0912 16:52:48.416571  108901 store.go:1342] Monitoring events count at <storage-prefix>//events
I0912 16:52:48.416600  108901 master.go:461] Enabling API group "events.k8s.io".
I0912 16:52:48.416824  108901 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.417236  108901 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.417536  108901 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.417651  108901 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.417772  108901 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.417859  108901 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.418055  108901 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.418153  108901 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.418275  108901 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.418387  108901 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.419122  108901 reflector.go:158] Listing and watching *core.Event from storage/cacher.go:/events
I0912 16:52:48.419546  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.419837  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.420108  108901 watch_cache.go:405] Replace watchCache (rev: 30499) 
I0912 16:52:48.421319  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.421922  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.423568  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.424020  108901 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.425066  108901 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.425639  108901 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.426815  108901 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.427237  108901 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.427383  108901 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I0912 16:52:48.428451  108901 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.428794  108901 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.429187  108901 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.430354  108901 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.431872  108901 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.433202  108901 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.433616  108901 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.435179  108901 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.436140  108901 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.436673  108901 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.437874  108901 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.438127  108901 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I0912 16:52:48.439204  108901 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.439698  108901 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.440949  108901 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.441873  108901 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.442582  108901 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.443805  108901 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.444844  108901 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.445972  108901 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.446719  108901 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.447831  108901 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.449092  108901 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.449344  108901 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I0912 16:52:48.450194  108901 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.451100  108901 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.451484  108901 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I0912 16:52:48.452563  108901 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.453586  108901 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.454152  108901 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.455093  108901 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.456040  108901 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.456858  108901 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.457653  108901 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.457872  108901 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I0912 16:52:48.459147  108901 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.460107  108901 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.460557  108901 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.461686  108901 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.462251  108901 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.462765  108901 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.463819  108901 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.464356  108901 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.465097  108901 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.466223  108901 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.466721  108901 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.467209  108901 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W0912 16:52:48.467442  108901 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0912 16:52:48.467522  108901 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0912 16:52:48.468611  108901 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.469508  108901 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.470671  108901 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.471615  108901 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I0912 16:52:48.472626  108901 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"831eb8a2-b75c-43e1-9716-f2523acf3289", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
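Each storage_factory.go line above records the versioned codec chosen for one resource ("storing X in GROUP/VERSION, reading as GROUP/__internal") together with the etcd backend configuration shared by this test apiserver: the per-run key Prefix, the ServerList pointing at the local etcd on http://127.0.0.1:2379, and two durations printed as raw nanosecond counts. A minimal Go sketch, assuming only the standard library, makes those two fields readable:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the CompactionInterval and CountMetricPollPeriod
	// fields printed in the log; time.Duration is an int64 nanosecond count,
	// so they format as 5m0s and 1m0s.
	compactionInterval := time.Duration(300000000000)
	countMetricPollPeriod := time.Duration(60000000000)
	fmt.Println(compactionInterval, countMetricPollPeriod)
}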
I0912 16:52:48.477283  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.477317  108901 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I0912 16:52:48.477328  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.477339  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.477348  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.477356  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.477389  108901 httplog.go:90] GET /healthz: (239.02µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
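The bracketed block above is the verbose healthz report: [+] marks a passing check, [-] a failing one whose reason is withheld from the HTTP response (the preceding healthz.go:177 lines carry the actual reasons). Until the etcd client connects and the post-start hooks (bootstrap-controller, rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, ca-registration) finish, GET /healthz does not return 200, and the repeated /healthz requests below appear to be readiness polling at roughly 100ms intervals. A minimal sketch of such a poll, assuming a placeholder address rather than the ephemeral port used by this run:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll /healthz until the apiserver reports ready (HTTP 200).
	for {
		resp, err := http.Get("http://127.0.0.1:8080/healthz")
		if err == nil {
			ready := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ready {
				fmt.Println("apiserver ready")
				return
			}
		}
		time.Sleep(100 * time.Millisecond)
	}
}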
I0912 16:52:48.478864  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.378196ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43400]
I0912 16:52:48.481446  108901 httplog.go:90] GET /api/v1/services: (1.085404ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43400]
I0912 16:52:48.485020  108901 httplog.go:90] GET /api/v1/services: (965.459µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43400]
I0912 16:52:48.487349  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.487375  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.487387  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.487396  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.487405  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.487428  108901 httplog.go:90] GET /healthz: (176.407µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.488616  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.316123ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43400]
I0912 16:52:48.491788  108901 httplog.go:90] POST /api/v1/namespaces: (1.891008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43404]
I0912 16:52:48.491825  108901 httplog.go:90] GET /api/v1/services: (1.919458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43400]
I0912 16:52:48.492014  108901 httplog.go:90] GET /api/v1/services: (3.193996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.493677  108901 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.037399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.495540  108901 httplog.go:90] POST /api/v1/namespaces: (1.384917ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.497172  108901 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.291194ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.498868  108901 httplog.go:90] POST /api/v1/namespaces: (1.368652ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
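The three GET/POST pairs above show the bootstrap controller ensuring the system namespaces exist: kube-system, kube-public and kube-node-lease are each looked up (404) and then created (201). A get-or-create sketch of that pattern with client-go, assuming a recent client-go release and an already-built *kubernetes.Clientset (neither is shown in this log, and the vendored client used by this run may have different signatures):

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespace is a hypothetical helper mirroring the 404-then-201
// pattern in the log: look the namespace up, create it only if absent.
func ensureNamespace(ctx context.Context, client *kubernetes.Clientset, name string) error {
	_, err := client.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already exists
	}
	if !apierrors.IsNotFound(err) {
		return err
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	return err
}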
I0912 16:52:48.579280  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.579318  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.579330  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.579340  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.579347  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.579388  108901 httplog.go:90] GET /healthz: (273.01µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:48.588144  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.588177  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.588189  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.588197  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.588205  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.588234  108901 httplog.go:90] GET /healthz: (243.464µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.683285  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.683335  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.683355  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.683365  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.683375  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.683415  108901 httplog.go:90] GET /healthz: (317.327µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:48.688668  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.688705  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.688719  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.688729  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.688738  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.688779  108901 httplog.go:90] GET /healthz: (266.013µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.778575  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.778619  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.778632  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.778640  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.778648  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.778687  108901 httplog.go:90] GET /healthz: (260.634µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:48.788202  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.788242  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.788253  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.788263  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.788271  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.788300  108901 httplog.go:90] GET /healthz: (260.444µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.878529  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.878571  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.878585  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.878594  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.878602  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.878639  108901 httplog.go:90] GET /healthz: (262.047µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:48.888273  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.888311  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.888323  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.888332  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.888340  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.888384  108901 httplog.go:90] GET /healthz: (280.935µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:48.978480  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.978513  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.978524  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.978534  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.978542  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.978592  108901 httplog.go:90] GET /healthz: (239.814µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:48.988102  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:48.988152  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:48.988165  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:48.988175  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:48.988183  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:48.988218  108901 httplog.go:90] GET /healthz: (282.01µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.078736  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:49.078767  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.078779  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.078789  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.078797  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.078828  108901 httplog.go:90] GET /healthz: (238.275µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:49.088254  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:49.088287  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.088300  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.088311  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.088319  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.088357  108901 httplog.go:90] GET /healthz: (249.613µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.178652  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:49.178686  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.178709  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.178722  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.178730  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.178774  108901 httplog.go:90] GET /healthz: (254.416µs) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:49.188176  108901 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I0912 16:52:49.188210  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.188222  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.188232  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.188240  108901 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.188286  108901 httplog.go:90] GET /healthz: (268.131µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.225604  108901 client.go:361] parsed scheme: "endpoint"
I0912 16:52:49.225696  108901 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
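The client.go / endpoint.go pair above is the etcd client establishing its gRPC connection to http://127.0.0.1:2379; in the healthz reports that follow, the etcd check flips from [-]etcd failed to [+]etcd ok.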
I0912 16:52:49.280475  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.280504  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.280514  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.280523  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.280569  108901 httplog.go:90] GET /healthz: (1.492349ms) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:49.289149  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.289179  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.289193  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.289201  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.289239  108901 httplog.go:90] GET /healthz: (1.305409ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.379550  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.379583  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.379594  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.379603  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.379641  108901 httplog.go:90] GET /healthz: (1.182635ms) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:49.392825  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.392854  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.392870  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.392883  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.392942  108901 httplog.go:90] GET /healthz: (1.58263ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.479525  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.479554  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.479566  108901 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I0912 16:52:49.479575  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I0912 16:52:49.479621  108901 httplog.go:90] GET /healthz: (1.153484ms) 0 [Go-http-client/1.1 127.0.0.1:43398]
I0912 16:52:49.484681  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.408068ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.484981  108901 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.918366ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.485226  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.90465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.487551  108901 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (1.887481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.487782  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.933762ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.487805  108901 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (2.407084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.488035  108901 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I0912 16:52:49.490329  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.94799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.490572  108901 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.197954ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.490330  108901 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (2.151098ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.496883  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.496917  108901 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I0912 16:52:49.496945  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.496992  108901 httplog.go:90] GET /healthz: (6.383817ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43414]
I0912 16:52:49.497607  108901 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (6.366757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.497855  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (6.688721ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.498420  108901 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I0912 16:52:49.498436  108901 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
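The two storage_scheduling.go lines record the scheduling post-start hook creating the built-in priority classes (the log shows the hook POSTing to the v1beta1 endpoint): system-node-critical with value 2000001000 and system-cluster-critical with value 2000000000. Expressed as scheduling/v1 API objects, as a sketch assuming the k8s.io/api scheduling/v1 types rather than code from this test:

package example

import (
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Names and values copied from the log lines above; other fields omitted.
var (
	systemNodeCritical = &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"},
		Value:      2000001000,
	}
	systemClusterCritical = &schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"},
		Value:      2000000000,
	}
)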
I0912 16:52:49.500329  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.819764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43398]
I0912 16:52:49.502243  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (1.521933ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.503715  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.126655ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.505077  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (849.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.506468  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.092866ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.507772  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (1.011889ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.510679  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.738976ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.511050  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
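From here the rbac/bootstrap-roles post-start hook reconciles the default ClusterRoles one by one, using the same look-up-then-create pattern sketched after the namespace bootstrap above: each storage_rbac.go "created clusterrole" line is preceded by a GET returning 404 and a POST returning 201. The interleaved /healthz reports keep showing [-]poststarthook/rbac/bootstrap-roles until this reconciliation completes.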
I0912 16:52:49.512293  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.052059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.515246  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.403689ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.515489  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I0912 16:52:49.516514  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (821.518µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.518922  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.00166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.519291  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I0912 16:52:49.520338  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (835.167µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.522639  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.842811ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.522819  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I0912 16:52:49.523969  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (879.582µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.526721  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.625608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.526911  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I0912 16:52:49.527850  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (758.479µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.529946  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639309ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.530148  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I0912 16:52:49.531420  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.11451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.533115  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.369928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.533375  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I0912 16:52:49.534258  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (740.628µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.536051  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49897ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.536386  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I0912 16:52:49.537380  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (833.213µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.539498  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.764403ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.539731  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I0912 16:52:49.540761  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (880.578µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.543127  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.882422ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.543533  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I0912 16:52:49.544605  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (846.929µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.546571  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.530609ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.546833  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I0912 16:52:49.547999  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (924.841µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.551089  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.349019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.551548  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I0912 16:52:49.553893  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.356531ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.557271  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.247143ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.557637  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I0912 16:52:49.558769  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (917.038µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.561175  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.950461ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.561677  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I0912 16:52:49.563704  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.820913ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.565815  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.596191ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.566228  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I0912 16:52:49.567633  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.172522ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.570547  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.978315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.570941  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I0912 16:52:49.572035  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (924.045µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.573849  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.475874ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.574035  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I0912 16:52:49.575072  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (837.764µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.577318  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.849335ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.577590  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I0912 16:52:49.578577  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (811.729µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.580818  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.674092ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.581025  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I0912 16:52:49.581797  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.581820  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.581848  108901 httplog.go:90] GET /healthz: (3.303637ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:49.582989  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.552897ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.585015  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.653501ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.585317  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I0912 16:52:49.586303  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (817.393µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.588894  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870376ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.589120  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I0912 16:52:49.589755  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.589776  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.589809  108901 httplog.go:90] GET /healthz: (1.360835ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.590215  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (801.914µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.592306  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.590429ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.592611  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I0912 16:52:49.593670  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (857.305µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.595632  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.545445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.595820  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I0912 16:52:49.596835  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (809.691µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.598859  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.630104ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.599174  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I0912 16:52:49.600220  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (875.735µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.602096  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.505727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.602400  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I0912 16:52:49.603436  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (811.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.605838  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.916741ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.606100  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I0912 16:52:49.607127  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (834.539µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.609216  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.719018ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.609591  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I0912 16:52:49.610742  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.009022ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.613373  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.155237ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.613790  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0912 16:52:49.614916  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (916.631µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.617281  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.889087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.617542  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0912 16:52:49.618758  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.017663ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.621254  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.077311ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.621642  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0912 16:52:49.624240  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (2.421247ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.628407  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.376887ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.628705  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0912 16:52:49.631036  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (2.15678ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.633525  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.026344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.633872  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I0912 16:52:49.635611  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.439517ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.638897  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.8024ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.639141  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I0912 16:52:49.640417  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.010017ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.643075  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.249964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.643337  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0912 16:52:49.645029  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.417915ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.647406  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.991803ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.647710  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I0912 16:52:49.649331  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (1.179764ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.658221  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.25285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.659061  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0912 16:52:49.663773  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (4.028524ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.666865  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.511176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.667357  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0912 16:52:49.668907  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.303208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.672083  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.609416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.672371  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I0912 16:52:49.673798  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.20707ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.676664  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.05119ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.676918  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I0912 16:52:49.679020  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.679051  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.679101  108901 httplog.go:90] GET /healthz: (910.685µs) 0 [Go-http-client/1.1 127.0.0.1:43412]
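(Aside, not part of the log: the block above is the apiserver's /healthz response while the rbac/bootstrap-roles post-start hook is still running, and the same block repeats below until the bootstrap finishes. A minimal sketch of the polling pattern these lines reflect, assuming a local, unauthenticated apiserver at an address and interval that are not taken from this log:)

```go
// Illustrative only: poll an apiserver's /healthz endpoint until it reports
// ready. The URL and poll interval are assumptions for the sketch.
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	const healthzURL = "http://127.0.0.1:8080/healthz" // assumed local apiserver address

	for {
		resp, err := http.Get(healthzURL)
		if err != nil {
			fmt.Println("healthz request failed:", err)
		} else {
			body, _ := ioutil.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// A non-200 response carries the per-check breakdown seen above,
			// e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
			fmt.Printf("not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```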
I0912 16:52:49.679143  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.735665ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.681700  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.792485ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.681996  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I0912 16:52:49.683232  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (999.205µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.685506  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.863673ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.685776  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0912 16:52:49.687064  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (887.061µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.689247  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.689283  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.689319  108901 httplog.go:90] GET /healthz: (872.168µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.689345  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.86428ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:49.689507  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0912 16:52:49.690897  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.221719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.693224  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698173ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.693676  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0912 16:52:49.695011  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.118399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.697703  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.095651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.698226  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I0912 16:52:49.700020  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (1.394121ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.704131  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.610285ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.704388  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0912 16:52:49.705606  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.00985ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.708073  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.950202ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.708319  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I0912 16:52:49.709710  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.23663ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.712084  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.870359ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.712275  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I0912 16:52:49.713604  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.186709ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.715535  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.510212ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.715735  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I0912 16:52:49.716850  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (916.61µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.718825  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431192ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.719722  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0912 16:52:49.720870  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (987.447µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.723232  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.84258ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.723428  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I0912 16:52:49.724628  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.051051ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.727150  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.79739ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.727348  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I0912 16:52:49.728660  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.158187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.745763  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.702469ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.747452  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0912 16:52:49.764814  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.777645ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.779518  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.779549  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.779590  108901 httplog.go:90] GET /healthz: (1.28008ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:49.785682  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.755578ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.785993  108901 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0912 16:52:49.789091  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.789129  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.789185  108901 httplog.go:90] GET /healthz: (1.174177ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.804625  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.545856ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.825458  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.478138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.826005  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I0912 16:52:49.844347  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.363586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.865576  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.542824ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.865852  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I0912 16:52:49.879816  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.879862  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.879955  108901 httplog.go:90] GET /healthz: (1.488745ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:49.884488  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.51773ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.889260  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.889295  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.889334  108901 httplog.go:90] GET /healthz: (1.36389ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.905636  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.656397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.905914  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I0912 16:52:49.924390  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.38691ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.945300  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.310166ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.945561  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I0912 16:52:49.964566  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.52188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.979556  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.979592  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.979644  108901 httplog.go:90] GET /healthz: (1.256363ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:49.985482  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.455281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:49.985790  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I0912 16:52:49.989200  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:49.989249  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:49.989288  108901 httplog.go:90] GET /healthz: (1.29329ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.004424  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.444061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.025891  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.853947ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.026154  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I0912 16:52:50.045087  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (2.031061ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.065432  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.463411ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.066104  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I0912 16:52:50.079819  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.079853  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.079894  108901 httplog.go:90] GET /healthz: (1.535472ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.084630  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.586667ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.089002  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.089045  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.089084  108901 httplog.go:90] GET /healthz: (1.188716ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.106397  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.981006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.106672  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I0912 16:52:50.124575  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.484167ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.145759  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.630445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.146069  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I0912 16:52:50.164914  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.42738ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.179596  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.179630  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.179673  108901 httplog.go:90] GET /healthz: (1.315775ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.185645  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.62507ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.185919  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0912 16:52:50.189062  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.189105  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.189158  108901 httplog.go:90] GET /healthz: (1.201852ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.204901  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.798023ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.225672  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.633306ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.225977  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0912 16:52:50.244656  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.534526ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.274595  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (11.511276ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.275078  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0912 16:52:50.280378  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.280414  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.280471  108901 httplog.go:90] GET /healthz: (1.382807ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.286108  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (3.179739ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.289026  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.289061  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.289112  108901 httplog.go:90] GET /healthz: (1.180368ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.306600  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.44287ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.306913  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0912 16:52:50.324804  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.694955ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.345555  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.575197ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.345838  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0912 16:52:50.364877  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.880125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.379544  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.379578  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.379618  108901 httplog.go:90] GET /healthz: (1.235876ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.386726  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.756914ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.387020  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0912 16:52:50.388916  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.388959  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.388995  108901 httplog.go:90] GET /healthz: (1.077519ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.404423  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.389398ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.425828  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.798635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.426120  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0912 16:52:50.444484  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.476416ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.465799  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.732321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.466769  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0912 16:52:50.479543  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.479580  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.479646  108901 httplog.go:90] GET /healthz: (1.283807ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.487279  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.940729ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.489510  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.489539  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.489584  108901 httplog.go:90] GET /healthz: (1.252065ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.505529  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.474889ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.506166  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0912 16:52:50.524553  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.540568ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.545836  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.733476ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.546149  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0912 16:52:50.564613  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.568776ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:50.584381  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.584413  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.584459  108901 httplog.go:90] GET /healthz: (3.076072ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:50.586850  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.311236ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.587138  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0912 16:52:50.588914  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.588968  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.589001  108901 httplog.go:90] GET /healthz: (1.086973ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.604623  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.498724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.625547  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.505518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.626001  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0912 16:52:50.644298  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.335422ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.665594  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.575832ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.665873  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0912 16:52:50.679403  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.679434  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.679472  108901 httplog.go:90] GET /healthz: (1.15395ms) 0 [Go-http-client/1.1 127.0.0.1:43402]
I0912 16:52:50.684236  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.309835ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.689087  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.689123  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.689191  108901 httplog.go:90] GET /healthz: (1.213868ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.705651  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.74527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.705905  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0912 16:52:50.724474  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.488642ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.746366  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.934533ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.746895  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0912 16:52:50.764362  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.389112ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.779555  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.779590  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.779643  108901 httplog.go:90] GET /healthz: (1.348853ms) 0 [Go-http-client/1.1 127.0.0.1:43402]
I0912 16:52:50.785325  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.401696ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.785573  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0912 16:52:50.788911  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.788954  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.788987  108901 httplog.go:90] GET /healthz: (1.061993ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.804246  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.30181ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.824990  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.046904ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.825329  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0912 16:52:50.844407  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (1.393242ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.865654  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.866187  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0912 16:52:50.879380  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.879413  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.879459  108901 httplog.go:90] GET /healthz: (1.163908ms) 0 [Go-http-client/1.1 127.0.0.1:43402]
I0912 16:52:50.884103  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.204168ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.888922  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.889020  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.889061  108901 httplog.go:90] GET /healthz: (1.098081ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.905420  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.367477ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.905673  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0912 16:52:50.924674  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.682505ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.945848  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.765834ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.946148  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0912 16:52:50.964396  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.402095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.979483  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.979514  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.979557  108901 httplog.go:90] GET /healthz: (1.219558ms) 0 [Go-http-client/1.1 127.0.0.1:43402]
I0912 16:52:50.989450  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.069754ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:50.989721  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0912 16:52:50.991160  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:50.991188  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:50.991247  108901 httplog.go:90] GET /healthz: (2.74942ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.004962  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.824798ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.025619  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.561691ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.026289  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0912 16:52:51.044480  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.472664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.065557  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.561997ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.065860  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0912 16:52:51.079743  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.079785  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.079836  108901 httplog.go:90] GET /healthz: (1.369022ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.084837  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.717164ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.092071  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.092108  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.092154  108901 httplog.go:90] GET /healthz: (4.057684ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.144392  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.384511ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.145019  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0912 16:52:51.147149  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.939785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.150169  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.289284ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.150375  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0912 16:52:51.164399  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.466696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.180279  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.180324  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.180370  108901 httplog.go:90] GET /healthz: (2.037903ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.185210  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.207061ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.185701  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0912 16:52:51.189171  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.189198  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.189246  108901 httplog.go:90] GET /healthz: (1.279288ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.205376  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.224092ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.228770  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.040079ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.229354  108901 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0912 16:52:51.244566  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.625912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.246550  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.542274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.265507  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.408039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.266270  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0912 16:52:51.279423  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.279463  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.279522  108901 httplog.go:90] GET /healthz: (1.2175ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.284361  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.36556ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.286451  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.608539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.289394  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.289418  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.289452  108901 httplog.go:90] GET /healthz: (1.514954ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.305118  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.060607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.305329  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0912 16:52:51.324684  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.567736ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.328804  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.370926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.344984  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.052628ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.345685  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0912 16:52:51.364856  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.686188ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.367596  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.59308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.392437  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.289208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.392630  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.392655  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.392683  108901 httplog.go:90] GET /healthz: (5.546515ms) 0 [Go-http-client/1.1 127.0.0.1:43402]
I0912 16:52:51.392752  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.392760  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.392783  108901 httplog.go:90] GET /healthz: (4.343759ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43454]
I0912 16:52:51.393618  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0912 16:52:51.404672  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.284264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.406562  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.205346ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.425373  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.390627ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.425598  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0912 16:52:51.444246  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.301702ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.446199  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.414177ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.465456  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.443157ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.466018  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0912 16:52:51.479531  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.479568  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.479610  108901 httplog.go:90] GET /healthz: (1.306521ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.484178  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.268965ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.485884  108901 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.244879ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.488795  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.488830  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.488871  108901 httplog.go:90] GET /healthz: (964.054µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.505296  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.356263ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.505558  108901 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0912 16:52:51.524452  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.393094ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.526180  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.265252ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.546219  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.812912ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.546492  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0912 16:52:51.564349  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.44506ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.566281  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.375431ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.579507  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.579540  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.579596  108901 httplog.go:90] GET /healthz: (1.251115ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.585513  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.448436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.585828  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0912 16:52:51.588891  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.588916  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.588971  108901 httplog.go:90] GET /healthz: (1.050364ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.604316  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.336986ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.606398  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.312073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.624999  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.014757ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.625424  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0912 16:52:51.644350  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.278653ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.646496  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.681829ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.665404  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.397987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.665660  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0912 16:52:51.679684  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.679716  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.679773  108901 httplog.go:90] GET /healthz: (1.316355ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.684519  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.55728ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.686565  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.47172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.688762  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.688791  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.688833  108901 httplog.go:90] GET /healthz: (969.774µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.705275  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.253184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.705559  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0912 16:52:51.724637  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.587329ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.726867  108901 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.516013ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.745251  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.173932ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.745771  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0912 16:52:51.764628  108901 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.711938ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.766897  108901 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.61047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.779328  108901 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I0912 16:52:51.779360  108901 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I0912 16:52:51.779410  108901 httplog.go:90] GET /healthz: (1.097595ms) 0 [Go-http-client/1.1 127.0.0.1:43412]
I0912 16:52:51.785420  108901 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.421929ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.785677  108901 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0912 16:52:51.789189  108901 httplog.go:90] GET /healthz: (1.13367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.791051  108901 httplog.go:90] GET /api/v1/namespaces/default: (1.249599ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.793436  108901 httplog.go:90] POST /api/v1/namespaces: (1.79712ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.795103  108901 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.261706ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.799173  108901 httplog.go:90] POST /api/v1/namespaces/default/services: (3.604777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.800758  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.207137ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.803454  108901 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.064796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.879499  108901 httplog.go:90] GET /healthz: (1.195236ms) 200 [Go-http-client/1.1 127.0.0.1:43412]
W0912 16:52:51.880301  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880352  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880363  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880410  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880424  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880435  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880448  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880476  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880485  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880494  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W0912 16:52:51.880536  108901 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I0912 16:52:51.880553  108901 factory.go:294] Creating scheduler from algorithm provider 'DefaultProvider'
I0912 16:52:51.880563  108901 factory.go:382] Creating scheduler with fit predicates 'map[CheckNodeCondition:{} CheckNodeDiskPressure:{} CheckNodeMemoryPressure:{} CheckNodePIDPressure:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
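The predicate list logged above includes CheckNodePIDPressure, which is the predicate this test exercises. As a rough, self-contained sketch only (not the scheduler's actual source), a check of that shape just inspects the node's PIDPressure condition:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// nodeUnderPIDPressure reports whether a node is advertising the
// PIDPressure condition, which is what a CheckNodePIDPressure-style
// predicate would reject. Illustrative helper, not kube-scheduler code.
func nodeUnderPIDPressure(node *v1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodePIDPressure && cond.Status == v1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	node := &v1.Node{
		Status: v1.NodeStatus{
			Conditions: []v1.NodeCondition{
				{Type: v1.NodePIDPressure, Status: v1.ConditionTrue},
			},
		},
	}
	fmt.Println("filter out node:", nodeUnderPIDPressure(node))
}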
I0912 16:52:51.880749  108901 shared_informer.go:197] Waiting for caches to sync for scheduler
I0912 16:52:51.881009  108901 reflector.go:120] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0912 16:52:51.881028  108901 reflector.go:158] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:230
I0912 16:52:51.882061  108901 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (731.731µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:52:51.882995  108901 get.go:250] Starting watch for /api/v1/pods, rv=30499 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=9m31s
I0912 16:52:51.980946  108901 shared_informer.go:227] caches populated
I0912 16:52:51.980978  108901 shared_informer.go:204] Caches are synced for scheduler 
I0912 16:52:51.981399  108901 reflector.go:120] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.981420  108901 reflector.go:158] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.981788  108901 reflector.go:120] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.981812  108901 reflector.go:158] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.981830  108901 reflector.go:120] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.981843  108901 reflector.go:158] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982325  108901 reflector.go:120] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982340  108901 reflector.go:158] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982433  108901 reflector.go:120] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982451  108901 reflector.go:158] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982767  108901 reflector.go:120] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982784  108901 reflector.go:158] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982853  108901 reflector.go:120] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.982868  108901 reflector.go:158] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983372  108901 reflector.go:120] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983392  108901 reflector.go:158] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983572  108901 reflector.go:120] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983587  108901 reflector.go:158] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983777  108901 reflector.go:120] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.983794  108901 reflector.go:158] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:134
I0912 16:52:51.985807  108901 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (574.918µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43474]
I0912 16:52:51.986421  108901 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (499.171µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43460]
I0912 16:52:51.987434  108901 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (596.827µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:51.989177  108901 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (491.768µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43402]
I0912 16:52:51.989670  108901 get.go:250] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=30499 labels= fields= timeout=5m30s
I0912 16:52:51.990785  108901 get.go:250] Starting watch for /api/v1/persistentvolumes, rv=30499 labels= fields= timeout=5m6s
I0912 16:52:51.991503  108901 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (432.366µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0912 16:52:51.991946  108901 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (330.906µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43462]
I0912 16:52:51.992393  108901 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (353.774µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0912 16:52:51.993207  108901 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (701.463µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43468]
I0912 16:52:51.993686  108901 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (384.575µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I0912 16:52:51.994913  108901 get.go:250] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=30499 labels= fields= timeout=5m18s
I0912 16:52:51.995028  108901 get.go:250] Starting watch for /apis/apps/v1/statefulsets, rv=30499 labels= fields= timeout=6m37s
I0912 16:52:51.995345  108901 get.go:250] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=30499 labels= fields= timeout=5m12s
I0912 16:52:51.995466  108901 get.go:250] Starting watch for /api/v1/services, rv=30661 labels= fields= timeout=5m29s
I0912 16:52:51.995722  108901 get.go:250] Starting watch for /api/v1/nodes, rv=30499 labels= fields= timeout=6m30s
I0912 16:52:51.996163  108901 get.go:250] Starting watch for /api/v1/replicationcontrollers, rv=30499 labels= fields= timeout=7m27s
I0912 16:52:51.996220  108901 get.go:250] Starting watch for /apis/apps/v1/replicasets, rv=30499 labels= fields= timeout=6m34s
I0912 16:52:51.997068  108901 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (10.555695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0912 16:52:52.001444  108901 get.go:250] Starting watch for /api/v1/persistentvolumeclaims, rv=30499 labels= fields= timeout=8m48s
I0912 16:52:52.081308  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081343  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081350  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081357  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081372  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081378  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081384  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081390  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081395  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081405  108901 shared_informer.go:227] caches populated
I0912 16:52:52.081415  108901 shared_informer.go:227] caches populated
I0912 16:52:52.084726  108901 httplog.go:90] POST /api/v1/nodes: (2.84397ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.085041  108901 node_tree.go:93] Added node "testnode" in group "" to NodeTree
I0912 16:52:52.087368  108901 httplog.go:90] PUT /api/v1/nodes/testnode/status: (2.016982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
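The PUT to /api/v1/nodes/testnode/status above is a write to the node's status subresource. For illustration only, with the helper names and the exact condition assumed rather than taken from the test, marking a node with PIDPressure through client-go of this era looks roughly like:

package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Fetch the node, append a PIDPressure=True condition, and write it back
	// through the status subresource (the PUT .../status seen in the log).
	node, err := cs.CoreV1().Nodes().Get("testnode", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
		Type:   v1.NodePIDPressure,
		Status: v1.ConditionTrue,
		Reason: "SimulatedPIDPressure",
	})
	if _, err := cs.CoreV1().Nodes().UpdateStatus(node); err != nil {
		panic(err)
	}
}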
I0912 16:52:52.089838  108901 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods: (1.969813ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.090467  108901 scheduling_queue.go:830] About to try and schedule pod node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pidpressure-fake-name
I0912 16:52:52.090483  108901 scheduler.go:530] Attempting to schedule pod: node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pidpressure-fake-name
I0912 16:52:52.090647  108901 scheduler_binder.go:257] AssumePodVolumes for pod "node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pidpressure-fake-name", node "testnode"
I0912 16:52:52.090663  108901 scheduler_binder.go:267] AssumePodVolumes for pod "node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pidpressure-fake-name", node "testnode": all PVCs bound and nothing to do
I0912 16:52:52.090713  108901 factory.go:606] Attempting to bind pidpressure-fake-name to testnode
I0912 16:52:52.095484  108901 httplog.go:90] POST /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name/binding: (4.549631ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.097481  108901 scheduler.go:662] pod node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pidpressure-fake-name is bound successfully on node "testnode", 1 nodes evaluated, 1 nodes were found feasible. Bound node resource: "Capacity: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>; Allocatable: CPU<0>|Memory<0>|Pods<32>|StorageEphemeral<0>.".
I0912 16:52:52.100729  108901 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/events: (2.611891ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
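The long run of GET requests for pidpressure-fake-name that follows is a poll loop: the harness re-reads the pod roughly every 100ms until some condition holds or a timeout expires. A minimal sketch of such a wait, assuming a client-go clientset with this era's context-free Get signature and a hypothetical helper name:

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCondition polls a pod every 100ms, mirroring the repeated GETs in
// the log, until check() is satisfied or the timeout expires. The helper name
// and condition are illustrative, not the test's own code.
func waitForPodCondition(cs kubernetes.Interface, ns, name string,
	check func(nodeName string) bool, timeout time.Duration) error {
	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return check(pod.Spec.NodeName), nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	err = waitForPodCondition(cs, "default", "pidpressure-fake-name",
		func(nodeName string) bool { return nodeName != "" }, 30*time.Second)
	fmt.Println("wait result:", err)
}

wait.Poll drives the retry cadence; the condition closure is where a test would assert on pod.Spec.NodeName or on the pod's conditions.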
I0912 16:52:52.192339  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.835943ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.293065  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.045914ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.392175  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.679479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.492347  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.802857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.592289  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.751926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.692387  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.911841ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.792349  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.765563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.892450  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.903579ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.991106  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:52.992289  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.769021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:52.994337  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:52.994859  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:52.994982  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:52.995164  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:52.997896  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.092343  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.797719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.192412  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.920896ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.292387  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.852354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.392324  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.790769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.492262  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.776533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.592563  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.055312ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.692503  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.933099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.792483  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.887818ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.892799  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.283719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.991211  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.992343  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.859624ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:53.994526  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.994997  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.995136  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.995313  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:53.998090  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.092992  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.326498ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.192990  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.006338ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.292160  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.625934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.392476  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.93092ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.492400  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.839626ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.592342  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.847771ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.693206  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.386353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.793219  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.501637ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.893489  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.772858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.991421  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.993618  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.990622ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:54.994730  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.995259  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.995264  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.995504  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:54.998408  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.092568  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.055275ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.192840  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.262099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.292574  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.03161ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.392437  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.908618ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.492595  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.985569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.592707  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.155402ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.693076  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.481634ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.792731  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.133116ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.892740  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.095843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.991537  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.992690  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.120333ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:55.994901  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.995373  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.995485  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.995586  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:55.998594  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.092576  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.028272ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.192746  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.101992ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.292405  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.814846ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.392355  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.817544ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.492285  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.755119ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.592293  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.719442ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.692422  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.881819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.796569  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (6.077922ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.915082  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (24.580176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.992042  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.992864  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.285921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:56.995122  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.995556  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.995653  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.995735  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:56.998778  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.092545  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.003189ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.192353  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.819048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.292364  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.857853ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.392488  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.965218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.492271  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.720276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.592447  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.895243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.692297  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.798711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.792617  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.06712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.892235  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.714775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.992443  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.86531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:57.992838  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.995303  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.995700  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.995829  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.996007  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:57.998966  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.092382  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.874721ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.192654  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.958617ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.292803  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.228391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.392515  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.965077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.493193  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.436934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.599478  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.941558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.692699  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.237147ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.792657  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.106451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.892466  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.945096ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.993761  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.202228ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:58.994102  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.995601  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.995836  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.996209  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.996417  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:58.999117  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.093731  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.987355ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.193971  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.170483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.298604  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (6.722751ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.394659  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.113283ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.493371  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.180564ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.592272  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.783769ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.692290  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.784952ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.792424  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.887464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.894217  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.486281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.994740  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.995589  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.256719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:52:59.995837  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.995968  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.996360  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.996586  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:52:59.999596  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.096746  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (6.209495ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.193884  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.310705ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.292320  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.840239ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.393230  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.721912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.493720  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.920243ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.592705  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.203569ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.692752  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.220057ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.792134  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.615308ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.892200  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.652393ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.992456  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.823602ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:00.994943  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.996100  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.996514  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.996717  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.999762  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:00.999950  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:01.126945  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (35.973387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.192402  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.872366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.323428  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (32.932543ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.392974  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.461735ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.494321  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.799636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.592618  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.109011ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.693365  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.887248ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.792709  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.773421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44238]
I0912 16:53:01.793465  108901 httplog.go:90] GET /api/v1/namespaces/default: (3.488582ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.795369  108901 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.475986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.797039  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.200332ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.892464  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.859132ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.992772  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.211344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:01.995092  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:01.996265  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:01.996674  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:01.996870  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:01.999924  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:02.000180  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:02.092460  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.944736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.192193  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.68087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.292437  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.542784ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.392400  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.844631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.492326  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.785052ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.592334  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.842629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.693048  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.510945ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.792282  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.78843ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.892595  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.515944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.992327  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.844396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:02.995240  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:02.996457  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:02.996850  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:02.997008  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.000096  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.000300  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.092716  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.235738ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.192619  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.073012ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.292760  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.237417ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.393083  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.416801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.492026  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.481234ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.592676  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.087819ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.692197  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.712229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.792807  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.236173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.892464  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.875276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.992611  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.036772ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:03.995381  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.997018  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.997104  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:03.997335  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.000274  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.000492  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.092594  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.060546ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.195298  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.704104ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.292693  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.072775ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.400552  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (8.253631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.496011  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (5.488412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.592999  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.490802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.692105  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.564235ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.792513  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.025163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.894645  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.116836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.993973  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.379479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:04.995553  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.997189  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.997278  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:04.997460  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.000480  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.000668  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.092455  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.929274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.192905  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.337792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.292407  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.905037ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.392635  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.006786ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.492824  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.189648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.593490  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.961485ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.692338  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.741481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.793061  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.40457ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.892697  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.086752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.993330  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.709433ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:05.995813  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.997357  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.997506  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:05.997602  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.000662  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.000860  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.092840  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.254438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.192382  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.835932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.292809  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.165816ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.392583  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.064675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.492248  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.74257ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.592432  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.633791ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.693082  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.803481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.792354  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.765714ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.892391  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.892777ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.992980  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.497803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:06.995981  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.997559  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.997648  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:06.997771  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.000791  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.001148  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.092486  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.012267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.193157  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.518494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.292722  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.149785ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.392692  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.098917ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.492604  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.01511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.592089  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.605326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.692630  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.061205ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.792280  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.748348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.892403  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.916912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.993277  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.669286ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:07.996145  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.997754  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.997756  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:07.998196  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.000876  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.001280  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.092214  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.707172ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.192353  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.772628ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.292394  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.802009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.392479  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.894327ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.492764  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.116506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.592389  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.826874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.692510  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.021682ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.792645  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.978923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.892359  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.733716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.993253  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.095416ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:08.996321  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.997988  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.998110  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:08.998344  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:09.000996  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:09.001458  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:09.092904  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.240136ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.192462  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.932464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.295321  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.137165ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.392505  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.897196ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.492394  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.703313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.593035  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.177058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.691878  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.378505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.792381  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.82158ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.891988  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.513694ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.991994  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.494892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:09.996480  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:09.998676  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:09.998948  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.001061  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.001260  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.001882  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.092099  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.434962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.192419  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.866047ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.292622  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.558072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.392740  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.132782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.491908  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.315088ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.592240  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.626279ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.692337  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.78874ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.792439  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.9ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.892407  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.838724ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.992004  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.443049ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:10.996670  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.998848  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:10.999105  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.001263  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.001419  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.001990  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.093733  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.825215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.192422  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.898944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.292021  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.49789ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.392201  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.681563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.492274  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.725536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.592197  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.647297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.691960  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.43713ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.791791  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.344515ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44238]
I0912 16:53:11.792317  108901 httplog.go:90] GET /api/v1/namespaces/default: (2.016069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.793789  108901 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.110309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.795290  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.016803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.891877  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.382304ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.991774  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.346471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:11.997478  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.999033  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:11.999264  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.001425  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.001565  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.002159  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.092173  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.595232ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.192442  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.544743ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.292357  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.878317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.392208  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.720629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.492161  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.673278ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.592282  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.732051ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.692410  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.965529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.792186  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.619048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.892498  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.968913ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.991749  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.31035ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:12.997633  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.999205  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:12.999842  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:13.001610  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:13.001744  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:13.002311  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:13.092176  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.672932ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.192346  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.76077ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.292401  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.872435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.395333  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.782101ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.492431  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.894276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.592408  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.879026ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.692447  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.788299ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.792221  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.745009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.895893  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.445757ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.992400  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.845218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:13.997840  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:13.999396  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.000055  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.001775  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.001915  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.002477  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.092111  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.655367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.192409  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.886558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.297867  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (6.745473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.392243  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.735212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.492406  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.810918ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.592538  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.976644ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.692373  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.862885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.792200  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.699021ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.892558  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.880446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.992628  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.069875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:14.998010  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:14.999567  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.000226  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.001959  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.002953  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.003039  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.092468  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.930313ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.192812  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.210072ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.292521  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.91561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.392142  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.607536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.492351  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.864121ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.591922  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.482458ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.695094  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.942372ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.792100  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.576507ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.891867  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.375663ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.992459  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.912082ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:15.998182  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:15.999767  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.000400  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.002113  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.003148  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.003919  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.092425  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.860674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.192404  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.817828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.292635  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.974317ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.392423  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.650909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.492641  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.120337ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.592921  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.106511ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.693318  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.660533ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.796418  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.96581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.892291  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.769929ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.992139  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.616145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:16.998388  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:16.999952  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:17.000599  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:17.002356  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:17.003355  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:17.004160  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:17.093278  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.566749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.194535  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (3.97ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.292336  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.781181ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.392454  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.917225ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.492322  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.751893ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.592623  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.028534ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.692375  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.804331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.792293  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.762155ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.892422  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.916115ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.992526  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.962547ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:17.998680  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.000258  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.000796  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.002526  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.003559  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.004351  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:18.092987  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.374925ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.192583  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.975315ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.292467  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.908323ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.392591  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.944804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.492969  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.410773ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.592844  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.21263ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.693253  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.599788ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.793303  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.605847ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.894701  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.126723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.993606  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.938353ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:18.998906  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.000475  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.001019  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.002706  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.003724  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.004575  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:19.093001  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.366983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.192659  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.105271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.292394  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.772324ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.392887  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.373999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.492189  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.658455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.592276  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.764048ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.692150  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.572778ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.792990  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.436807ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.892609  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.097408ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.992239  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.655294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:19.999134  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.000656  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.001184  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.002992  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.003976  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.004840  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:20.099799  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (9.276817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.192367  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.852767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.294436  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.995462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.391950  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.395677ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.492016  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.460145ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.595249  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.682357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.692207  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.541653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.792096  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.522839ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.892386  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.805477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.992392  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.819451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:20.999309  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.001480  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.001525  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.003166  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.004110  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.004999  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:21.092770  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.424261ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.192358  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.808908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.291962  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.434183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.393044  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (2.471249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.492170  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.643176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.592127  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.653615ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.692598  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.99259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.791666  108901 httplog.go:90] GET /api/v1/namespaces/default: (1.450627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.792456  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.982674ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:44238]
I0912 16:53:21.793655  108901 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.483426ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.796323  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.10675ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.892572  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.724938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.992708  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.708474ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:21.999478  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.002359  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.002377  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.003422  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.004219  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.005140  108901 reflector.go:241] k8s.io/client-go/informers/factory.go:134: forcing resync
I0912 16:53:22.092257  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.705024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.097282  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (4.579834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.104488  108901 httplog.go:90] DELETE /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (6.48464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.107405  108901 httplog.go:90] GET /api/v1/namespaces/node-pid-pressure26609873-fa6e-4a50-b377-717f09af80d8/pods/pidpressure-fake-name: (1.30783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.108157  108901 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=30499&timeoutSeconds=571&watch=true: (30.225575299s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43412]
I0912 16:53:22.108219  108901 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=30499&timeout=5m18s&timeoutSeconds=318&watch=true: (30.11355429s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43462]
I0912 16:53:22.108346  108901 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=30499&timeout=7m27s&timeoutSeconds=447&watch=true: (30.112520717s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43460]
I0912 16:53:22.108401  108901 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=30499&timeout=6m34s&timeoutSeconds=394&watch=true: (30.112429459s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43474]
I0912 16:53:22.108490  108901 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=30499&timeout=8m48s&timeoutSeconds=528&watch=true: (30.110699743s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43458]
I0912 16:53:22.108527  108901 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=30499&timeout=6m30s&timeoutSeconds=390&watch=true: (30.113077897s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43470]
I0912 16:53:22.108747  108901 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=30661&timeout=5m29s&timeoutSeconds=329&watch=true: (30.113571959s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43464]
I0912 16:53:22.108837  108901 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=30499&timeout=5m12s&timeoutSeconds=312&watch=true: (30.113726297s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43478]
I0912 16:53:22.108874  108901 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=30499&timeout=5m30s&timeoutSeconds=330&watch=true: (30.119458712s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43480]
I0912 16:53:22.108886  108901 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=30499&timeout=5m6s&timeoutSeconds=306&watch=true: (30.118452314s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43476]
E0912 16:53:22.108973  108901 scheduling_queue.go:833] Error while retrieving next pod from scheduling queue: scheduling queue is closed
I0912 16:53:22.109663  108901 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=30499&timeout=6m37s&timeoutSeconds=397&watch=true: (30.114979531s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43472]
I0912 16:53:22.115823  108901 httplog.go:90] DELETE /api/v1/nodes: (7.587959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.116131  108901 controller.go:182] Shutting down kubernetes service endpoint reconciler
I0912 16:53:22.118406  108901 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.879884ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
I0912 16:53:22.120471  108901 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.608667ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:43466]
--- FAIL: TestNodePIDPressure (33.90s)
    predicates_test.go:924: Test Failed: error, timed out waiting for the condition, while waiting for scheduled

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20190912-164515.xml
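
The failure above is the scheduler integration suite's standard wait-poll timeout: after creating pidpressure-fake-name, the test repeatedly GETs the pod (the ~100 ms cadence visible in the httplog lines above) until it is bound to a node, and gives up after roughly 30 seconds. Below is a minimal, hypothetical sketch of that polling pattern, not the actual helper in predicates_test.go; the helper name is invented, and the context-free client-go Get signature matches the libraries in use around this release. The key point is that wait.Poll returns wait.ErrWaitTimeout, whose text is the literal "timed out waiting for the condition" quoted in the failure message.

    // Hypothetical sketch of the "wait for scheduled" poll; not the exact code
    // under test. wait.ErrWaitTimeout carries the string
    // "timed out waiting for the condition" seen in the failure above.
    package pidpressure

    import (
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForPodScheduled polls the API server every 100ms until the named pod
    // is bound to a node or the timeout expires, in which case wait.Poll
    // returns wait.ErrWaitTimeout.
    func waitForPodScheduled(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
    	return wait.Poll(100*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		// Scheduled means the scheduler bound the pod (spec.nodeName set)
    		// or posted a PodScheduled=True condition.
    		if pod.Spec.NodeName != "" {
    			return true, nil
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == v1.PodScheduled && c.Status == v1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    }

In this run the condition never became true, so after ~30 s of GETs the harness deletes the pod (the DELETE at 16:53:22) and TestNodePIDPressure reports the timeout at predicates_test.go:924.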

Error lines from build-log.txt

... skipping 885 lines ...
W0912 16:40:18.868] I0912 16:40:18.866410   52998 shared_informer.go:197] Waiting for caches to sync for service account
W0912 16:40:18.869] I0912 16:40:18.866994   52998 controllermanager.go:534] Started "daemonset"
W0912 16:40:18.869] I0912 16:40:18.867036   52998 daemon_controller.go:267] Starting daemon sets controller
W0912 16:40:18.869] I0912 16:40:18.867065   52998 shared_informer.go:197] Waiting for caches to sync for daemon sets
W0912 16:40:18.869] I0912 16:40:18.867489   52998 controllermanager.go:534] Started "csrcleaner"
W0912 16:40:18.869] I0912 16:40:18.867516   52998 cleaner.go:81] Starting CSR cleaner controller
W0912 16:40:18.870] E0912 16:40:18.868441   52998 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0912 16:40:18.870] W0912 16:40:18.868505   52998 controllermanager.go:526] Skipping "service"
W0912 16:40:18.870] I0912 16:40:18.868537   52998 core.go:211] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0912 16:40:18.870] W0912 16:40:18.868560   52998 controllermanager.go:526] Skipping "route"
W0912 16:40:18.870] I0912 16:40:18.869227   52998 controllermanager.go:534] Started "clusterrole-aggregation"
W0912 16:40:18.871] I0912 16:40:18.869257   52998 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0912 16:40:18.871] I0912 16:40:18.869477   52998 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator
... skipping 14 lines ...
W0912 16:40:18.886] I0912 16:40:18.885734   52998 cronjob_controller.go:96] Starting CronJob Manager
W0912 16:40:18.886] I0912 16:40:18.886166   52998 controllermanager.go:534] Started "ttl"
W0912 16:40:18.886] W0912 16:40:18.886325   52998 controllermanager.go:526] Skipping "nodeipam"
W0912 16:40:18.886] I0912 16:40:18.886226   52998 ttl_controller.go:116] Starting TTL controller
W0912 16:40:18.887] I0912 16:40:18.886578   52998 shared_informer.go:197] Waiting for caches to sync for TTL
W0912 16:40:18.887] I0912 16:40:18.887036   52998 node_lifecycle_controller.go:77] Sending events to api server
W0912 16:40:18.887] E0912 16:40:18.887266   52998 core.go:201] failed to start cloud node lifecycle controller: no cloud provider provided
W0912 16:40:18.887] W0912 16:40:18.887408   52998 controllermanager.go:526] Skipping "cloud-node-lifecycle"
W0912 16:40:18.888] W0912 16:40:18.888179   52998 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0912 16:40:18.889] I0912 16:40:18.889199   52998 controllermanager.go:534] Started "attachdetach"
W0912 16:40:18.889] I0912 16:40:18.889278   52998 attach_detach_controller.go:334] Starting attach detach controller
W0912 16:40:18.890] I0912 16:40:18.889457   52998 shared_informer.go:197] Waiting for caches to sync for attach detach
W0912 16:40:18.890] I0912 16:40:18.890115   52998 controllermanager.go:534] Started "persistentvolume-expander"
... skipping 31 lines ...
W0912 16:40:19.601] I0912 16:40:19.304380   52998 replica_set.go:182] Starting replicaset controller
W0912 16:40:19.601] I0912 16:40:19.304398   52998 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
W0912 16:40:19.601] I0912 16:40:19.304474   52998 disruption.go:333] Starting disruption controller
W0912 16:40:19.601] I0912 16:40:19.304491   52998 shared_informer.go:197] Waiting for caches to sync for disruption
W0912 16:40:19.601] I0912 16:40:19.304561   52998 certificate_controller.go:118] Starting certificate controller "csrapproving"
W0912 16:40:19.602] I0912 16:40:19.304615   52998 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving
W0912 16:40:19.602] W0912 16:40:19.332503   52998 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0912 16:40:19.602] I0912 16:40:19.369678   52998 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
W0912 16:40:19.602] I0912 16:40:19.381601   52998 shared_informer.go:204] Caches are synced for namespace 
W0912 16:40:19.602] I0912 16:40:19.384896   52998 shared_informer.go:204] Caches are synced for PV protection 
W0912 16:40:19.602] E0912 16:40:19.385726   52998 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0912 16:40:19.603] I0912 16:40:19.386752   52998 shared_informer.go:204] Caches are synced for TTL 
W0912 16:40:19.603] E0912 16:40:19.388565   52998 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0912 16:40:19.603] E0912 16:40:19.394142   52998 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0912 16:40:19.603] I0912 16:40:19.395029   52998 shared_informer.go:204] Caches are synced for expand 
I0912 16:40:19.704] Successful: the flag '--client' shows correct client info
I0912 16:40:19.704] (BSuccessful: the flag '--client' correctly has no server version info
I0912 16:40:19.704] (B+++ [0912 16:40:19] Testing kubectl version: verify json output
I0912 16:40:19.804] Successful: --output json has correct client info
I0912 16:40:19.809] (BSuccessful: --output json has correct server info
... skipping 79 lines ...
I0912 16:40:22.814] +++ working dir: /go/src/k8s.io/kubernetes
I0912 16:40:22.816] +++ command: run_RESTMapper_evaluation_tests
I0912 16:40:22.826] +++ [0912 16:40:22] Creating namespace namespace-1568306422-14991
I0912 16:40:22.897] namespace/namespace-1568306422-14991 created
I0912 16:40:22.973] Context "test" modified.
I0912 16:40:22.978] +++ [0912 16:40:22] Testing RESTMapper
I0912 16:40:23.074] +++ [0912 16:40:23] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0912 16:40:23.088] +++ exit code: 0
I0912 16:40:23.198] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0912 16:40:23.199] bindings                                                                      true         Binding
I0912 16:40:23.200] componentstatuses                 cs                                          false        ComponentStatus
I0912 16:40:23.200] configmaps                        cm                                          true         ConfigMap
I0912 16:40:23.200] endpoints                         ep                                          true         Endpoints
... skipping 616 lines ...
I0912 16:40:41.967] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0912 16:40:42.059] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0912 16:40:42.133] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0912 16:40:42.229] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0912 16:40:42.394] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:40:42.598] (Bpod/env-test-pod created
W0912 16:40:42.699] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0912 16:40:42.699] error: setting 'all' parameter but found a non empty selector. 
W0912 16:40:42.699] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 16:40:42.700] I0912 16:40:41.628294   49456 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0912 16:40:42.700] error: min-available and max-unavailable cannot be both specified
I0912 16:40:42.801] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0912 16:40:42.801] Name:         env-test-pod
I0912 16:40:42.802] Namespace:    test-kubectl-describe-pod
I0912 16:40:42.802] Priority:     0
I0912 16:40:42.802] Node:         <none>
I0912 16:40:42.802] Labels:       <none>
... skipping 174 lines ...
I0912 16:40:56.307] (Bpod/valid-pod patched
I0912 16:40:56.417] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0912 16:40:56.505] (Bpod/valid-pod patched
I0912 16:40:56.622] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0912 16:40:56.799] (Bpod/valid-pod patched
I0912 16:40:56.887] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0912 16:40:57.054] (B+++ [0912 16:40:57] "kubectl patch with resourceVersion 498" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0912 16:40:57.287] pod "valid-pod" deleted
I0912 16:40:57.297] pod/valid-pod replaced
I0912 16:40:57.390] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0912 16:40:57.534] (BSuccessful
I0912 16:40:57.535] message:error: --grace-period must have --force specified
I0912 16:40:57.535] has:\-\-grace-period must have \-\-force specified
I0912 16:40:57.674] Successful
I0912 16:40:57.675] message:error: --timeout must have --force specified
I0912 16:40:57.676] has:\-\-timeout must have \-\-force specified
I0912 16:40:57.836] node/node-v1-test created
W0912 16:40:57.937] W0912 16:40:57.835851   52998 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0912 16:40:58.038] node/node-v1-test replaced
I0912 16:40:58.143] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0912 16:40:58.233] (Bnode "node-v1-test" deleted
I0912 16:40:58.335] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0912 16:40:58.620] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0912 16:40:59.646] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 66 lines ...
I0912 16:41:03.476] +++ [0912 16:41:03] Testing kubectl --save-config
I0912 16:41:03.481] +++ [0912 16:41:03] Creating namespace namespace-1568306463-28741
I0912 16:41:03.551] namespace/namespace-1568306463-28741 created
I0912 16:41:03.622] Context "test" modified.
I0912 16:41:03.712] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:03.872] (Bpod/test-pod created
W0912 16:41:03.973] error: 'name' already has a value (valid-pod), and --overwrite is false
W0912 16:41:03.973] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 16:41:03.973] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0912 16:41:04.074] pod "test-pod" deleted
I0912 16:41:04.074] +++ [0912 16:41:04] Creating namespace namespace-1568306464-31572
I0912 16:41:04.127] namespace/namespace-1568306464-31572 created
I0912 16:41:04.196] Context "test" modified.
... skipping 41 lines ...
I0912 16:41:07.378] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0912 16:41:07.380] +++ working dir: /go/src/k8s.io/kubernetes
I0912 16:41:07.382] +++ command: run_kubectl_create_error_tests
I0912 16:41:07.392] +++ [0912 16:41:07] Creating namespace namespace-1568306467-15688
I0912 16:41:07.466] namespace/namespace-1568306467-15688 created
I0912 16:41:07.533] Context "test" modified.
I0912 16:41:07.540] +++ [0912 16:41:07] Testing kubectl create with error
W0912 16:41:07.640] Error: must specify one of -f and -k
W0912 16:41:07.641] 
W0912 16:41:07.641] Create a resource from a file or from stdin.
W0912 16:41:07.641] 
W0912 16:41:07.641]  JSON and YAML formats are accepted.
W0912 16:41:07.641] 
W0912 16:41:07.641] Examples:
... skipping 41 lines ...
W0912 16:41:07.649] 
W0912 16:41:07.649] Usage:
W0912 16:41:07.649]   kubectl create -f FILENAME [options]
W0912 16:41:07.650] 
W0912 16:41:07.650] Use "kubectl <command> --help" for more information about a given command.
W0912 16:41:07.650] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0912 16:41:07.764] +++ [0912 16:41:07] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0912 16:41:07.865] kubectl convert is DEPRECATED and will be removed in a future version.
W0912 16:41:07.865] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0912 16:41:07.966] +++ exit code: 0
I0912 16:41:07.972] Recording: run_kubectl_apply_tests
I0912 16:41:07.972] Running command: run_kubectl_apply_tests
I0912 16:41:07.997] 
... skipping 16 lines ...
I0912 16:41:09.690] apply.sh:276: Successful get pods test-pod {{.metadata.labels.name}}: test-pod-label
I0912 16:41:09.767] (Bpod "test-pod" deleted
I0912 16:41:10.020] customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
W0912 16:41:10.343] I0912 16:41:10.343032   49456 client.go:361] parsed scheme: "endpoint"
W0912 16:41:10.344] I0912 16:41:10.343070   49456 endpoint.go:66] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W0912 16:41:10.347] I0912 16:41:10.347145   49456 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0912 16:41:10.446] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0912 16:41:10.547] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0912 16:41:10.552] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0912 16:41:10.590] +++ exit code: 0
I0912 16:41:10.629] Recording: run_kubectl_run_tests
I0912 16:41:10.630] Running command: run_kubectl_run_tests
I0912 16:41:10.654] 
... skipping 97 lines ...
I0912 16:41:13.153] Context "test" modified.
I0912 16:41:13.159] +++ [0912 16:41:13] Testing kubectl create filter
I0912 16:41:13.244] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:13.430] (Bpod/selector-test-pod created
I0912 16:41:13.519] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0912 16:41:13.615] (BSuccessful
I0912 16:41:13.616] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0912 16:41:13.616] has:pods "selector-test-pod-dont-apply" not found
I0912 16:41:13.715] pod "selector-test-pod" deleted
I0912 16:41:13.733] +++ exit code: 0
I0912 16:41:13.765] Recording: run_kubectl_apply_deployments_tests
I0912 16:41:13.766] Running command: run_kubectl_apply_deployments_tests
I0912 16:41:13.789] 
... skipping 18 lines ...
I0912 16:41:15.141] (Bapps.sh:132: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
I0912 16:41:15.228] (Bdeployment.apps "my-depl" deleted
I0912 16:41:15.238] replicaset.apps "my-depl-64b97f7d4d" deleted
I0912 16:41:15.246] pod "my-depl-64b97f7d4d-xsfwp" deleted
W0912 16:41:15.347] I0912 16:41:14.400187   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306473-24893", Name:"my-depl", UID:"b566f34d-56b0-496d-b134-386f4c9a24a6", APIVersion:"apps/v1", ResourceVersion:"552", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-64b97f7d4d to 1
W0912 16:41:15.348] I0912 16:41:14.403870   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"my-depl-64b97f7d4d", UID:"d300e656-0370-4da6-8c51-361f57e06e4f", APIVersion:"apps/v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-64b97f7d4d-xsfwp
W0912 16:41:15.348] E0912 16:41:15.254241   52998 replica_set.go:450] Sync "namespace-1568306473-24893/my-depl-64b97f7d4d" failed with replicasets.apps "my-depl-64b97f7d4d" not found
I0912 16:41:15.448] apps.sh:138: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:15.467] (Bapps.sh:139: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:15.573] (Bapps.sh:140: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:15.672] (Bapps.sh:144: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:15.845] (Bdeployment.apps/nginx created
W0912 16:41:15.946] I0912 16:41:15.855022   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306473-24893", Name:"nginx", UID:"967dae6d-fc35-40e3-b7c6-f9e9db4b4410", APIVersion:"apps/v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
W0912 16:41:15.946] I0912 16:41:15.863170   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-8484dd655", UID:"d0ed6bbf-29cd-4748-babd-52f3bd12c274", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-zrskr
W0912 16:41:15.947] I0912 16:41:15.867128   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-8484dd655", UID:"d0ed6bbf-29cd-4748-babd-52f3bd12c274", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-lbmwk
W0912 16:41:15.947] I0912 16:41:15.869338   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-8484dd655", UID:"d0ed6bbf-29cd-4748-babd-52f3bd12c274", APIVersion:"apps/v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-v4ls4
I0912 16:41:16.047] apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
I0912 16:41:20.166] (BSuccessful
I0912 16:41:20.167] message:Error from server (Conflict): error when applying patch:
I0912 16:41:20.168] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568306473-24893\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0912 16:41:20.168] to:
I0912 16:41:20.168] Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
I0912 16:41:20.168] Name: "nginx", Namespace: "namespace-1568306473-24893"
I0912 16:41:20.171] Object: &{map["apiVersion":"apps/v1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1568306473-24893\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx1\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-09-12T16:41:15Z" "generation":'\x01' "labels":map["name":"nginx"] "name":"nginx" "namespace":"namespace-1568306473-24893" "resourceVersion":"589" "selfLink":"/apis/apps/v1/namespaces/namespace-1568306473-24893/deployments/nginx" "uid":"967dae6d-fc35-40e3-b7c6-f9e9db4b4410"] "spec":map["progressDeadlineSeconds":'\u0258' "replicas":'\x03' "revisionHistoryLimit":'\n' "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":"25%" "maxUnavailable":"25%"] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-09-12T16:41:15Z" "lastUpdateTime":"2019-09-12T16:41:15Z" "message":"Deployment does not have minimum availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"] map["lastTransitionTime":"2019-09-12T16:41:15Z" "lastUpdateTime":"2019-09-12T16:41:15Z" "message":"ReplicaSet \"nginx-8484dd655\" is progressing." "reason":"ReplicaSetUpdated" "status":"True" "type":"Progressing"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0912 16:41:20.171] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
I0912 16:41:20.171] has:Error from server (Conflict)
W0912 16:41:21.811] I0912 16:41:21.810583   52998 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1568306464-22872
I0912 16:41:25.417] deployment.apps/nginx configured
I0912 16:41:25.505] Successful
I0912 16:41:25.505] message:        "name": "nginx2"
I0912 16:41:25.505]           "name": "nginx2"
I0912 16:41:25.505] has:"name": "nginx2"
W0912 16:41:25.606] I0912 16:41:25.422391   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306473-24893", Name:"nginx", UID:"86add5ff-d5bf-46b4-9a22-48ba6079363e", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0912 16:41:25.606] I0912 16:41:25.425765   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"f1eb2b9e-ab6f-47fc-abef-4fac9850f590", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-vptx8
W0912 16:41:25.607] I0912 16:41:25.429026   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"f1eb2b9e-ab6f-47fc-abef-4fac9850f590", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-zxc4r
W0912 16:41:25.607] I0912 16:41:25.429788   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"f1eb2b9e-ab6f-47fc-abef-4fac9850f590", APIVersion:"apps/v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-btgn5
W0912 16:41:29.737] E0912 16:41:29.736778   52998 replica_set.go:450] Sync "namespace-1568306473-24893/nginx-668b6c7744" failed with Operation cannot be fulfilled on replicasets.apps "nginx-668b6c7744": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1568306473-24893/nginx-668b6c7744, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f1eb2b9e-ab6f-47fc-abef-4fac9850f590, UID in object meta: 
W0912 16:41:30.714] I0912 16:41:30.713683   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306473-24893", Name:"nginx", UID:"cab0dc2b-d82c-4bc9-933c-799d5d527a00", APIVersion:"apps/v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
W0912 16:41:30.720] I0912 16:41:30.720148   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"68bb7c27-4ea2-4d77-ad40-e54605ab18c0", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-hg2sm
W0912 16:41:30.724] I0912 16:41:30.724243   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"68bb7c27-4ea2-4d77-ad40-e54605ab18c0", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-lpflw
W0912 16:41:30.726] I0912 16:41:30.725807   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306473-24893", Name:"nginx-668b6c7744", UID:"68bb7c27-4ea2-4d77-ad40-e54605ab18c0", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-p9pbk
I0912 16:41:30.826] Successful
I0912 16:41:30.827] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 132 lines ...
I0912 16:41:32.532] +++ [0912 16:41:32] Creating namespace namespace-1568306492-627
I0912 16:41:32.599] namespace/namespace-1568306492-627 created
I0912 16:41:32.665] Context "test" modified.
I0912 16:41:32.671] +++ [0912 16:41:32] Testing kubectl get
I0912 16:41:32.753] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:32.830] (BSuccessful
I0912 16:41:32.831] message:Error from server (NotFound): pods "abc" not found
I0912 16:41:32.831] has:pods "abc" not found
I0912 16:41:32.914] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:32.996] (BSuccessful
I0912 16:41:32.996] message:Error from server (NotFound): pods "abc" not found
I0912 16:41:32.996] has:pods "abc" not found
I0912 16:41:33.086] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:33.166] (BSuccessful
I0912 16:41:33.166] message:{
I0912 16:41:33.166]     "apiVersion": "v1",
I0912 16:41:33.167]     "items": [],
... skipping 23 lines ...
I0912 16:41:33.500] has not:No resources found
I0912 16:41:33.575] Successful
I0912 16:41:33.576] message:NAME
I0912 16:41:33.576] has not:No resources found
I0912 16:41:33.658] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:33.751] (BSuccessful
I0912 16:41:33.751] message:error: the server doesn't have a resource type "foobar"
I0912 16:41:33.751] has not:No resources found
I0912 16:41:33.828] Successful
I0912 16:41:33.829] message:No resources found in namespace-1568306492-627 namespace.
I0912 16:41:33.829] has:No resources found
I0912 16:41:33.904] Successful
I0912 16:41:33.904] message:
I0912 16:41:33.905] has not:No resources found
I0912 16:41:33.982] Successful
I0912 16:41:33.983] message:No resources found in namespace-1568306492-627 namespace.
I0912 16:41:33.983] has:No resources found
I0912 16:41:34.065] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:34.154] (BSuccessful
I0912 16:41:34.154] message:Error from server (NotFound): pods "abc" not found
I0912 16:41:34.154] has:pods "abc" not found
I0912 16:41:34.155] FAIL!
I0912 16:41:34.156] message:Error from server (NotFound): pods "abc" not found
I0912 16:41:34.156] has not:List
I0912 16:41:34.156] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0912 16:41:34.259] Successful
I0912 16:41:34.259] message:I0912 16:41:34.216966   62950 loader.go:375] Config loaded from file:  /tmp/tmp.cpkGYM70Ly/.kube/config
I0912 16:41:34.259] I0912 16:41:34.218469   62950 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0912 16:41:34.260] I0912 16:41:34.236730   62950 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
... skipping 660 lines ...
I0912 16:41:39.845] Successful
I0912 16:41:39.845] message:NAME    DATA   AGE
I0912 16:41:39.845] one     0      0s
I0912 16:41:39.845] three   0      0s
I0912 16:41:39.845] two     0      0s
I0912 16:41:39.846] STATUS    REASON          MESSAGE
I0912 16:41:39.846] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 16:41:39.846] has not:watch is only supported on individual resources
I0912 16:41:40.951] Successful
I0912 16:41:40.951] message:STATUS    REASON          MESSAGE
I0912 16:41:40.951] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 16:41:40.951] has not:watch is only supported on individual resources
I0912 16:41:40.955] +++ [0912 16:41:40] Creating namespace namespace-1568306500-21343
I0912 16:41:41.022] namespace/namespace-1568306500-21343 created
I0912 16:41:41.087] Context "test" modified.
I0912 16:41:41.170] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:41.313] (Bpod/valid-pod created
... skipping 56 lines ...
I0912 16:41:41.398] }
I0912 16:41:41.471] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0912 16:41:41.680] (B<no value>Successful
I0912 16:41:41.680] message:valid-pod:
I0912 16:41:41.681] has:valid-pod:
I0912 16:41:41.753] Successful
I0912 16:41:41.753] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0912 16:41:41.753] 	template was:
I0912 16:41:41.754] 		{.missing}
I0912 16:41:41.754] 	object given to jsonpath engine was:
I0912 16:41:41.755] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-09-12T16:41:41Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1568306500-21343", "resourceVersion":"693", "selfLink":"/api/v1/namespaces/namespace-1568306500-21343/pods/valid-pod", "uid":"ccd79968-8f2e-43a2-abe5-4f896d0e6fd8"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0912 16:41:41.755] has:missing is not found
I0912 16:41:41.825] Successful
I0912 16:41:41.825] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0912 16:41:41.825] 	template was:
I0912 16:41:41.825] 		{{.missing}}
I0912 16:41:41.826] 	raw data was:
I0912 16:41:41.826] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-09-12T16:41:41Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1568306500-21343","resourceVersion":"693","selfLink":"/api/v1/namespaces/namespace-1568306500-21343/pods/valid-pod","uid":"ccd79968-8f2e-43a2-abe5-4f896d0e6fd8"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0912 16:41:41.826] 	object given to template engine was:
I0912 16:41:41.827] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-09-12T16:41:41Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1568306500-21343 resourceVersion:693 selfLink:/api/v1/namespaces/namespace-1568306500-21343/pods/valid-pod uid:ccd79968-8f2e-43a2-abe5-4f896d0e6fd8] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0912 16:41:41.827] has:map has no entry for key "missing"
W0912 16:41:41.928] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0912 16:41:42.900] Successful
I0912 16:41:42.901] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 16:41:42.901] valid-pod   0/1     Pending   0          0s
I0912 16:41:42.901] STATUS      REASON          MESSAGE
I0912 16:41:42.902] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 16:41:42.902] has:STATUS
I0912 16:41:42.902] Successful
I0912 16:41:42.903] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 16:41:42.903] valid-pod   0/1     Pending   0          0s
I0912 16:41:42.903] STATUS      REASON          MESSAGE
I0912 16:41:42.903] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 16:41:42.904] has:valid-pod
I0912 16:41:43.978] Successful
I0912 16:41:43.978] message:pod/valid-pod
I0912 16:41:43.978] has not:STATUS
I0912 16:41:43.980] Successful
I0912 16:41:43.980] message:pod/valid-pod
... skipping 72 lines ...
I0912 16:41:45.062] status:
I0912 16:41:45.062]   phase: Pending
I0912 16:41:45.062]   qosClass: Guaranteed
I0912 16:41:45.062] ---
I0912 16:41:45.062] has:name: valid-pod
I0912 16:41:45.133] Successful
I0912 16:41:45.133] message:Error from server (NotFound): pods "invalid-pod" not found
I0912 16:41:45.133] has:"invalid-pod" not found
I0912 16:41:45.205] pod "valid-pod" deleted
I0912 16:41:45.284] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:41:45.425] (Bpod/redis-master created
I0912 16:41:45.428] pod/valid-pod created
I0912 16:41:45.509] Successful
... skipping 31 lines ...
I0912 16:41:46.490] +++ command: run_kubectl_exec_pod_tests
I0912 16:41:46.500] +++ [0912 16:41:46] Creating namespace namespace-1568306506-20713
I0912 16:41:46.570] namespace/namespace-1568306506-20713 created
I0912 16:41:46.637] Context "test" modified.
I0912 16:41:46.643] +++ [0912 16:41:46] Testing kubectl exec POD COMMAND
I0912 16:41:46.720] Successful
I0912 16:41:46.721] message:Error from server (NotFound): pods "abc" not found
I0912 16:41:46.721] has:pods "abc" not found
W0912 16:41:46.822] I0912 16:41:45.974843   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306500-21343", Name:"test-the-deployment", UID:"8d0774eb-098e-435f-a2b4-47aa182786e7", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-69fdbb5f7d to 3
W0912 16:41:46.822] I0912 16:41:45.978260   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306500-21343", Name:"test-the-deployment-69fdbb5f7d", UID:"c8b4d7b4-0dfa-4815-8a34-f0f6594b573d", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-npqwg
W0912 16:41:46.823] I0912 16:41:45.981226   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306500-21343", Name:"test-the-deployment-69fdbb5f7d", UID:"c8b4d7b4-0dfa-4815-8a34-f0f6594b573d", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-z7xgs
W0912 16:41:46.823] I0912 16:41:45.981290   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306500-21343", Name:"test-the-deployment-69fdbb5f7d", UID:"c8b4d7b4-0dfa-4815-8a34-f0f6594b573d", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-the-deployment-69fdbb5f7d-pn8hh
I0912 16:41:46.923] pod/test-pod created
I0912 16:41:46.950] Successful
I0912 16:41:46.950] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 16:41:46.950] has not:pods "test-pod" not found
I0912 16:41:46.952] Successful
I0912 16:41:46.953] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 16:41:46.953] has not:pod or type/name must be specified
I0912 16:41:47.028] pod "test-pod" deleted
I0912 16:41:47.045] +++ exit code: 0
I0912 16:41:47.074] Recording: run_kubectl_exec_resource_name_tests
I0912 16:41:47.074] Running command: run_kubectl_exec_resource_name_tests
I0912 16:41:47.094] 
... skipping 2 lines ...
I0912 16:41:47.100] +++ command: run_kubectl_exec_resource_name_tests
I0912 16:41:47.110] +++ [0912 16:41:47] Creating namespace namespace-1568306507-10975
I0912 16:41:47.178] namespace/namespace-1568306507-10975 created
I0912 16:41:47.241] Context "test" modified.
I0912 16:41:47.247] +++ [0912 16:41:47] Testing kubectl exec TYPE/NAME COMMAND
I0912 16:41:47.336] Successful
I0912 16:41:47.336] message:error: the server doesn't have a resource type "foo"
I0912 16:41:47.336] has:error:
I0912 16:41:47.411] Successful
I0912 16:41:47.411] message:Error from server (NotFound): deployments.apps "bar" not found
I0912 16:41:47.411] has:"bar" not found
I0912 16:41:47.551] pod/test-pod created
I0912 16:41:47.690] replicaset.apps/frontend created
W0912 16:41:47.791] I0912 16:41:47.693277   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306507-10975", Name:"frontend", UID:"5692f721-8ade-4f4a-bb39-083898e429f9", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kbjh4
W0912 16:41:47.791] I0912 16:41:47.695986   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306507-10975", Name:"frontend", UID:"5692f721-8ade-4f4a-bb39-083898e429f9", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-shzjt
W0912 16:41:47.792] I0912 16:41:47.696085   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306507-10975", Name:"frontend", UID:"5692f721-8ade-4f4a-bb39-083898e429f9", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gfgzt
I0912 16:41:47.892] configmap/test-set-env-config created
I0912 16:41:47.920] Successful
I0912 16:41:47.920] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0912 16:41:47.920] has:not implemented
I0912 16:41:48.012] Successful
I0912 16:41:48.013] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 16:41:48.013] has not:not found
I0912 16:41:48.015] Successful
I0912 16:41:48.015] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0912 16:41:48.015] has not:pod or type/name must be specified
I0912 16:41:48.123] Successful
I0912 16:41:48.123] message:Error from server (BadRequest): pod frontend-gfgzt does not have a host assigned
I0912 16:41:48.123] has not:not found
I0912 16:41:48.125] Successful
I0912 16:41:48.125] message:Error from server (BadRequest): pod frontend-gfgzt does not have a host assigned
I0912 16:41:48.125] has not:pod or type/name must be specified
I0912 16:41:48.199] pod "test-pod" deleted
I0912 16:41:48.281] replicaset.apps "frontend" deleted
I0912 16:41:48.366] configmap "test-set-env-config" deleted
I0912 16:41:48.383] +++ exit code: 0
I0912 16:41:48.412] Recording: run_create_secret_tests
I0912 16:41:48.412] Running command: run_create_secret_tests
I0912 16:41:48.431] 
I0912 16:41:48.433] +++ Running case: test-cmd.run_create_secret_tests 
I0912 16:41:48.436] +++ working dir: /go/src/k8s.io/kubernetes
I0912 16:41:48.437] +++ command: run_create_secret_tests
I0912 16:41:48.527] Successful
I0912 16:41:48.527] message:Error from server (NotFound): secrets "mysecret" not found
I0912 16:41:48.527] has:secrets "mysecret" not found
I0912 16:41:48.686] Successful
I0912 16:41:48.686] message:Error from server (NotFound): secrets "mysecret" not found
I0912 16:41:48.686] has:secrets "mysecret" not found
I0912 16:41:48.688] Successful
I0912 16:41:48.688] message:user-specified
I0912 16:41:48.688] has:user-specified
I0912 16:41:48.763] Successful
I0912 16:41:48.848] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"2776f65f-f7d2-4e06-9943-5ec75da530c8","resourceVersion":"767","creationTimestamp":"2019-09-12T16:41:48Z"}}
... skipping 2 lines ...
I0912 16:41:49.028] has:uid
I0912 16:41:49.102] Successful
I0912 16:41:49.102] message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"2776f65f-f7d2-4e06-9943-5ec75da530c8","resourceVersion":"768","creationTimestamp":"2019-09-12T16:41:48Z"},"data":{"key1":"config1"}}
I0912 16:41:49.103] has:config1
I0912 16:41:49.172] {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"2776f65f-f7d2-4e06-9943-5ec75da530c8"}}
I0912 16:41:49.269] Successful
I0912 16:41:49.269] message:Error from server (NotFound): configmaps "tester-update-cm" not found
I0912 16:41:49.269] has:configmaps "tester-update-cm" not found
I0912 16:41:49.283] +++ exit code: 0
I0912 16:41:49.313] Recording: run_kubectl_create_kustomization_directory_tests
I0912 16:41:49.313] Running command: run_kubectl_create_kustomization_directory_tests
I0912 16:41:49.334] 
I0912 16:41:49.336] +++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
I0912 16:41:51.797] valid-pod   0/1     Pending   0          0s
I0912 16:41:51.797] has:valid-pod
I0912 16:41:52.875] Successful
I0912 16:41:52.876] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 16:41:52.876] valid-pod   0/1     Pending   0          0s
I0912 16:41:52.876] STATUS      REASON          MESSAGE
I0912 16:41:52.876] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0912 16:41:52.876] has:Timeout exceeded while reading body
I0912 16:41:52.952] Successful
I0912 16:41:52.953] message:NAME        READY   STATUS    RESTARTS   AGE
I0912 16:41:52.953] valid-pod   0/1     Pending   0          1s
I0912 16:41:52.953] has:valid-pod
I0912 16:41:53.020] Successful
I0912 16:41:53.021] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0912 16:41:53.021] has:Invalid timeout value
I0912 16:41:53.096] pod "valid-pod" deleted
I0912 16:41:53.113] +++ exit code: 0
I0912 16:41:53.140] Recording: run_crd_tests
I0912 16:41:53.140] Running command: run_crd_tests
I0912 16:41:53.160] 
... skipping 158 lines ...
I0912 16:41:57.351] foo.company.com/test patched
I0912 16:41:57.435] crd.sh:236: Successful get foos/test {{.patched}}: value1
I0912 16:41:57.515] (Bfoo.company.com/test patched
I0912 16:41:57.596] crd.sh:238: Successful get foos/test {{.patched}}: value2
I0912 16:41:57.669] (Bfoo.company.com/test patched
I0912 16:41:57.753] crd.sh:240: Successful get foos/test {{.patched}}: <no value>
I0912 16:41:57.914] (B+++ [0912 16:41:57] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0912 16:41:57.979] {
I0912 16:41:57.980]     "apiVersion": "company.com/v1",
I0912 16:41:57.980]     "kind": "Foo",
I0912 16:41:57.980]     "metadata": {
I0912 16:41:57.980]         "annotations": {
I0912 16:41:57.980]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 192 lines ...
I0912 16:42:27.740] crd.sh:455: Successful get bars {{len .items}}: 1
I0912 16:42:27.823] (Bnamespace "non-native-resources" deleted
I0912 16:42:33.008] crd.sh:458: Successful get bars {{len .items}}: 0
I0912 16:42:33.160] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0912 16:42:33.248] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0912 16:42:33.342] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
W0912 16:42:33.443] Error from server (NotFound): namespaces "non-native-resources" not found
I0912 16:42:33.543] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0912 16:42:33.543] +++ exit code: 0
I0912 16:42:33.544] Recording: run_cmd_with_img_tests
I0912 16:42:33.544] Running command: run_cmd_with_img_tests
I0912 16:42:33.544] 
I0912 16:42:33.544] +++ Running case: test-cmd.run_cmd_with_img_tests 
... skipping 8 lines ...
W0912 16:42:33.801] I0912 16:42:33.800965   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306553-29460", Name:"test1-6cdffdb5b8", UID:"3cf5d9c6-696e-4918-a9dd-ea25939f82f2", APIVersion:"apps/v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-f7zqx
I0912 16:42:33.902] Successful
I0912 16:42:33.902] message:deployment.apps/test1 created
I0912 16:42:33.903] has:deployment.apps/test1 created
I0912 16:42:33.903] deployment.apps "test1" deleted
I0912 16:42:33.961] Successful
I0912 16:42:33.961] message:error: Invalid image name "InvalidImageName": invalid reference format
I0912 16:42:33.961] has:error: Invalid image name "InvalidImageName": invalid reference format
I0912 16:42:33.973] +++ exit code: 0
I0912 16:42:34.004] +++ [0912 16:42:34] Testing recursive resources
I0912 16:42:34.009] +++ [0912 16:42:34] Creating namespace namespace-1568306554-3827
I0912 16:42:34.075] namespace/namespace-1568306554-3827 created
I0912 16:42:34.139] Context "test" modified.
I0912 16:42:34.221] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:34.495] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:34.497] Successful
I0912 16:42:34.498] message:pod/busybox0 created
I0912 16:42:34.498] pod/busybox1 created
I0912 16:42:34.498] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0912 16:42:34.499] has:error validating data: kind not set
I0912 16:42:34.581] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:34.741] generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0912 16:42:34.743] Successful
I0912 16:42:34.744] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:34.744] has:Object 'Kind' is missing
I0912 16:42:34.827] generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:35.071] generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0912 16:42:35.073] Successful
I0912 16:42:35.073] message:pod/busybox0 replaced
I0912 16:42:35.073] pod/busybox1 replaced
I0912 16:42:35.074] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0912 16:42:35.074] has:error validating data: kind not set
I0912 16:42:35.162] generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:35.256] Successful
I0912 16:42:35.257] message:Name:         busybox0
I0912 16:42:35.257] Namespace:    namespace-1568306554-3827
I0912 16:42:35.258] Priority:     0
I0912 16:42:35.258] Node:         <none>
... skipping 159 lines ...
I0912 16:42:35.271] has:Object 'Kind' is missing
I0912 16:42:35.369] generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:35.549] generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0912 16:42:35.551] Successful
I0912 16:42:35.551] message:pod/busybox0 annotated
I0912 16:42:35.551] pod/busybox1 annotated
I0912 16:42:35.552] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:35.552] has:Object 'Kind' is missing
I0912 16:42:35.641] generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:35.919] generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0912 16:42:35.922] Successful
I0912 16:42:35.922] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0912 16:42:35.922] pod/busybox0 configured
I0912 16:42:35.922] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0912 16:42:35.922] pod/busybox1 configured
I0912 16:42:35.922] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0912 16:42:35.923] has:error validating data: kind not set
I0912 16:42:36.013] generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:36.210] deployment.apps/nginx created
I0912 16:42:36.309] generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0912 16:42:36.391] generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0912 16:42:36.553] generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
I0912 16:42:36.555] Successful
... skipping 42 lines ...
I0912 16:42:36.634] deployment.apps "nginx" deleted
I0912 16:42:36.731] generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:36.896] generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:36.898] Successful
I0912 16:42:36.899] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0912 16:42:36.899] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0912 16:42:36.899] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:36.900] has:Object 'Kind' is missing
I0912 16:42:36.988] generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:37.078] Successful
I0912 16:42:37.078] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.078] has:busybox0:busybox1:
I0912 16:42:37.080] Successful
I0912 16:42:37.081] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.081] has:Object 'Kind' is missing
I0912 16:42:37.172] generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:37.261] pod/busybox0 labeled
I0912 16:42:37.261] pod/busybox1 labeled
I0912 16:42:37.261] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.347] generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0912 16:42:37.349] Successful
I0912 16:42:37.350] message:pod/busybox0 labeled
I0912 16:42:37.350] pod/busybox1 labeled
I0912 16:42:37.350] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.350] has:Object 'Kind' is missing
I0912 16:42:37.436] generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:37.515] pod/busybox0 patched
I0912 16:42:37.515] pod/busybox1 patched
I0912 16:42:37.516] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.599] generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0912 16:42:37.601] Successful
I0912 16:42:37.601] message:pod/busybox0 patched
I0912 16:42:37.602] pod/busybox1 patched
I0912 16:42:37.602] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.602] has:Object 'Kind' is missing
I0912 16:42:37.685] generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:37.853] generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:37.856] Successful
I0912 16:42:37.856] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0912 16:42:37.856] pod "busybox0" force deleted
I0912 16:42:37.856] pod "busybox1" force deleted
I0912 16:42:37.857] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0912 16:42:37.857] has:Object 'Kind' is missing
I0912 16:42:37.940] generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:38.084] replicationcontroller/busybox0 created
I0912 16:42:38.089] replicationcontroller/busybox1 created
I0912 16:42:38.181] generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:38.269] generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:38.354] generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
I0912 16:42:38.438] generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
I0912 16:42:38.605] generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0912 16:42:38.690] generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0912 16:42:38.692] Successful
I0912 16:42:38.692] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0912 16:42:38.692] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0912 16:42:38.693] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:38.693] has:Object 'Kind' is missing
I0912 16:42:38.772] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0912 16:42:38.861] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0912 16:42:38.952] generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:39.040] generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
I0912 16:42:39.127] generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
I0912 16:42:39.316] generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0912 16:42:39.408] generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0912 16:42:39.410] Successful
I0912 16:42:39.410] message:service/busybox0 exposed
I0912 16:42:39.410] service/busybox1 exposed
I0912 16:42:39.411] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:39.411] has:Object 'Kind' is missing
I0912 16:42:39.504] generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:39.597] generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
I0912 16:42:39.690] generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0912 16:42:39.898] generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
I0912 16:42:39.993] generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
I0912 16:42:39.995] Successful
I0912 16:42:39.996] message:replicationcontroller/busybox0 scaled
I0912 16:42:39.996] replicationcontroller/busybox1 scaled
I0912 16:42:39.996] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:39.996] has:Object 'Kind' is missing
I0912 16:42:40.095] generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:40.292] generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:40.294] Successful
I0912 16:42:40.294] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0912 16:42:40.295] replicationcontroller "busybox0" force deleted
I0912 16:42:40.295] replicationcontroller "busybox1" force deleted
I0912 16:42:40.295] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:40.296] has:Object 'Kind' is missing
I0912 16:42:40.380] generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:40.532] deployment.apps/nginx1-deployment created
I0912 16:42:40.542] deployment.apps/nginx0-deployment created
I0912 16:42:40.642] generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0912 16:42:40.726] generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0912 16:42:40.909] generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0912 16:42:40.911] Successful
I0912 16:42:40.912] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0912 16:42:40.912] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0912 16:42:40.912] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0912 16:42:40.912] has:Object 'Kind' is missing
I0912 16:42:40.993] deployment.apps/nginx1-deployment paused
I0912 16:42:40.996] deployment.apps/nginx0-deployment paused
I0912 16:42:41.085] generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0912 16:42:41.086] Successful
I0912 16:42:41.087] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0912 16:42:41.357] 1         <none>
I0912 16:42:41.357] 
I0912 16:42:41.357] deployment.apps/nginx0-deployment 
I0912 16:42:41.357] REVISION  CHANGE-CAUSE
I0912 16:42:41.357] 1         <none>
I0912 16:42:41.357] 
I0912 16:42:41.358] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0912 16:42:41.358] has:nginx0-deployment
I0912 16:42:41.359] Successful
I0912 16:42:41.359] message:deployment.apps/nginx1-deployment 
I0912 16:42:41.359] REVISION  CHANGE-CAUSE
I0912 16:42:41.359] 1         <none>
I0912 16:42:41.359] 
I0912 16:42:41.359] deployment.apps/nginx0-deployment 
I0912 16:42:41.359] REVISION  CHANGE-CAUSE
I0912 16:42:41.359] 1         <none>
I0912 16:42:41.359] 
I0912 16:42:41.360] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0912 16:42:41.360] has:nginx1-deployment
I0912 16:42:41.361] Successful
I0912 16:42:41.361] message:deployment.apps/nginx1-deployment 
I0912 16:42:41.361] REVISION  CHANGE-CAUSE
I0912 16:42:41.361] 1         <none>
I0912 16:42:41.361] 
I0912 16:42:41.362] deployment.apps/nginx0-deployment 
I0912 16:42:41.362] REVISION  CHANGE-CAUSE
I0912 16:42:41.362] 1         <none>
I0912 16:42:41.362] 
I0912 16:42:41.362] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0912 16:42:41.362] has:Object 'Kind' is missing
I0912 16:42:41.432] deployment.apps "nginx1-deployment" force deleted
I0912 16:42:41.436] deployment.apps "nginx0-deployment" force deleted
W0912 16:42:41.537] W0912 16:42:34.166310   49456 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 16:42:41.537] E0912 16:42:34.167648   52998 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.537] W0912 16:42:34.254346   49456 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 16:42:41.537] E0912 16:42:34.255978   52998 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.538] W0912 16:42:34.349179   49456 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 16:42:41.538] E0912 16:42:34.350880   52998 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.538] W0912 16:42:34.452771   49456 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
W0912 16:42:41.538] E0912 16:42:34.454498   52998 reflector.go:280] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.538] E0912 16:42:35.169032   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.538] E0912 16:42:35.257368   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.539] E0912 16:42:35.356309   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.539] E0912 16:42:35.455662   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.539] E0912 16:42:36.170680   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.539] I0912 16:42:36.214255   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306554-3827", Name:"nginx", UID:"0c95b910-a807-4eb2-a361-23234e2c9a8a", APIVersion:"apps/v1", ResourceVersion:"950", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
W0912 16:42:41.540] I0912 16:42:36.217670   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx-f87d999f7", UID:"0ab0b3dc-cce5-4148-b296-e2ebb7beb22c", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-hmwz5
W0912 16:42:41.540] I0912 16:42:36.220422   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx-f87d999f7", UID:"0ab0b3dc-cce5-4148-b296-e2ebb7beb22c", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-h7tvk
W0912 16:42:41.540] I0912 16:42:36.220577   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx-f87d999f7", UID:"0ab0b3dc-cce5-4148-b296-e2ebb7beb22c", APIVersion:"apps/v1", ResourceVersion:"951", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-z8wlm
W0912 16:42:41.541] E0912 16:42:36.258770   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.541] E0912 16:42:36.357551   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.541] kubectl convert is DEPRECATED and will be removed in a future version.
W0912 16:42:41.541] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0912 16:42:41.541] E0912 16:42:36.457159   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.541] E0912 16:42:37.173442   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.542] E0912 16:42:37.259825   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.542] E0912 16:42:37.359060   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.542] E0912 16:42:37.458794   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.542] I0912 16:42:37.917990   52998 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0912 16:42:41.542] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0912 16:42:41.543] I0912 16:42:38.089189   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox0", UID:"d24d01c3-9634-4105-be4e-a7c2a975bd5e", APIVersion:"v1", ResourceVersion:"981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-bd6pj
W0912 16:42:41.543] I0912 16:42:38.090634   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox1", UID:"4d53c348-7d03-4f6e-aed8-2eb72176445d", APIVersion:"v1", ResourceVersion:"983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xw7bt
W0912 16:42:41.543] E0912 16:42:38.174623   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.543] E0912 16:42:38.260884   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.544] E0912 16:42:38.360428   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.544] E0912 16:42:38.460209   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.544] E0912 16:42:39.175956   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.544] E0912 16:42:39.262286   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.544] E0912 16:42:39.361600   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.545] E0912 16:42:39.461560   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.545] I0912 16:42:39.785116   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox0", UID:"d24d01c3-9634-4105-be4e-a7c2a975bd5e", APIVersion:"v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jwrwr
W0912 16:42:41.545] I0912 16:42:39.796765   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox1", UID:"4d53c348-7d03-4f6e-aed8-2eb72176445d", APIVersion:"v1", ResourceVersion:"1007", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qzpzz
W0912 16:42:41.546] E0912 16:42:40.177326   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.546] E0912 16:42:40.264018   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.546] E0912 16:42:40.363271   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.546] E0912 16:42:40.462842   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.546] I0912 16:42:40.540383   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306554-3827", Name:"nginx1-deployment", UID:"cfb9b390-38a1-42c7-be2c-ce1b156cf1fc", APIVersion:"apps/v1", ResourceVersion:"1023", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
W0912 16:42:41.547] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0912 16:42:41.547] I0912 16:42:40.543397   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx1-deployment-7bdbbfb5cf", UID:"18a27e76-05d5-42f2-9e31-1540948d3a2a", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-4h242
W0912 16:42:41.547] I0912 16:42:40.545446   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306554-3827", Name:"nginx0-deployment", UID:"311bb330-d8a3-4952-b2eb-e77bfc5d9b8b", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
W0912 16:42:41.548] I0912 16:42:40.547088   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx1-deployment-7bdbbfb5cf", UID:"18a27e76-05d5-42f2-9e31-1540948d3a2a", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-8qdcb
W0912 16:42:41.548] I0912 16:42:40.549411   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx0-deployment-57c6bff7f6", UID:"81898039-86c1-4fca-b539-d99917bc5261", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-mr5fv
W0912 16:42:41.549] I0912 16:42:40.552913   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306554-3827", Name:"nginx0-deployment-57c6bff7f6", UID:"81898039-86c1-4fca-b539-d99917bc5261", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-nr77j
W0912 16:42:41.549] E0912 16:42:41.178260   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.549] E0912 16:42:41.265105   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.549] E0912 16:42:41.364509   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:41.549] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 16:42:41.550] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0912 16:42:41.550] E0912 16:42:41.464380   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:42.180] E0912 16:42:42.179573   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:42.267] E0912 16:42:42.266537   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:42.366] E0912 16:42:42.365887   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:42.466] E0912 16:42:42.465727   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:42:42.567] generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:42.673] replicationcontroller/busybox0 created
I0912 16:42:42.678] replicationcontroller/busybox1 created
I0912 16:42:42.773] generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0912 16:42:42.861] Successful
I0912 16:42:42.862] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0912 16:42:42.864] message:no rollbacker has been implemented for "ReplicationController"
I0912 16:42:42.864] no rollbacker has been implemented for "ReplicationController"
I0912 16:42:42.864] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:42.864] has:Object 'Kind' is missing
I0912 16:42:42.951] Successful
I0912 16:42:42.952] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:42.952] error: replicationcontrollers "busybox0" pausing is not supported
I0912 16:42:42.953] error: replicationcontrollers "busybox1" pausing is not supported
I0912 16:42:42.953] has:Object 'Kind' is missing
I0912 16:42:42.953] Successful
I0912 16:42:42.954] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:42.954] error: replicationcontrollers "busybox0" pausing is not supported
I0912 16:42:42.954] error: replicationcontrollers "busybox1" pausing is not supported
I0912 16:42:42.954] has:replicationcontrollers "busybox0" pausing is not supported
I0912 16:42:42.956] Successful
I0912 16:42:42.956] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:42.956] error: replicationcontrollers "busybox0" pausing is not supported
I0912 16:42:42.956] error: replicationcontrollers "busybox1" pausing is not supported
I0912 16:42:42.956] has:replicationcontrollers "busybox1" pausing is not supported
I0912 16:42:43.047] Successful
I0912 16:42:43.047] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:43.047] error: replicationcontrollers "busybox0" resuming is not supported
I0912 16:42:43.048] error: replicationcontrollers "busybox1" resuming is not supported
I0912 16:42:43.048] has:Object 'Kind' is missing
I0912 16:42:43.049] Successful
I0912 16:42:43.049] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:43.050] error: replicationcontrollers "busybox0" resuming is not supported
I0912 16:42:43.050] error: replicationcontrollers "busybox1" resuming is not supported
I0912 16:42:43.050] has:replicationcontrollers "busybox0" resuming is not supported
I0912 16:42:43.051] Successful
I0912 16:42:43.052] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0912 16:42:43.052] error: replicationcontrollers "busybox0" resuming is not supported
I0912 16:42:43.052] error: replicationcontrollers "busybox1" resuming is not supported
I0912 16:42:43.052] has:replicationcontrollers "busybox0" resuming is not supported
I0912 16:42:43.123] replicationcontroller "busybox0" force deleted
I0912 16:42:43.128] replicationcontroller "busybox1" force deleted
W0912 16:42:43.229] I0912 16:42:42.676836   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox0", UID:"b34de046-9b50-499f-b75f-b7f656789110", APIVersion:"v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-v89sc
W0912 16:42:43.229] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0912 16:42:43.230] I0912 16:42:42.681989   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306554-3827", Name:"busybox1", UID:"9b461349-b43e-4c77-bf30-66efe843f20d", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-4hwxf
W0912 16:42:43.230] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 16:42:43.230] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0912 16:42:43.231] E0912 16:42:43.180953   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:43.268] E0912 16:42:43.267853   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:43.367] E0912 16:42:43.367287   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:43.468] E0912 16:42:43.467370   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:42:44.135] Recording: run_namespace_tests
I0912 16:42:44.135] Running command: run_namespace_tests
I0912 16:42:44.158] 
I0912 16:42:44.161] +++ Running case: test-cmd.run_namespace_tests 
I0912 16:42:44.163] +++ working dir: /go/src/k8s.io/kubernetes
I0912 16:42:44.166] +++ command: run_namespace_tests
I0912 16:42:44.174] +++ [0912 16:42:44] Testing kubectl(v1:namespaces)
I0912 16:42:44.246] namespace/my-namespace created
I0912 16:42:44.336] core.sh:1308: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0912 16:42:44.411] namespace "my-namespace" deleted
W0912 16:42:44.512] E0912 16:42:44.182308   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:44.512] E0912 16:42:44.269267   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:44.513] E0912 16:42:44.368844   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:44.513] E0912 16:42:44.468661   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:45.184] E0912 16:42:45.183644   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:45.271] E0912 16:42:45.270681   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:45.370] E0912 16:42:45.370136   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:45.470] E0912 16:42:45.470115   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:46.185] E0912 16:42:46.185231   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:46.272] E0912 16:42:46.272274   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:46.372] E0912 16:42:46.371632   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:46.472] E0912 16:42:46.472144   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:47.187] E0912 16:42:47.186655   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:47.274] E0912 16:42:47.273848   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:47.373] E0912 16:42:47.373036   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:47.474] E0912 16:42:47.473586   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:48.189] E0912 16:42:48.188137   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:48.275] E0912 16:42:48.275249   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:48.375] E0912 16:42:48.374770   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:48.475] E0912 16:42:48.474976   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:49.190] E0912 16:42:49.189521   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:49.277] E0912 16:42:49.276581   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:49.378] E0912 16:42:49.378093   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:49.476] E0912 16:42:49.475745   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:42:49.577] namespace/my-namespace condition met
I0912 16:42:49.596] Successful
I0912 16:42:49.597] message:Error from server (NotFound): namespaces "my-namespace" not found
I0912 16:42:49.597] has: not found
I0912 16:42:49.671] namespace/my-namespace created
I0912 16:42:49.764] core.sh:1317: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0912 16:42:50.003] Successful
I0912 16:42:50.003] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0912 16:42:50.003] namespace "kube-node-lease" deleted
... skipping 29 lines ...
I0912 16:42:50.006] namespace "namespace-1568306510-3731" deleted
I0912 16:42:50.006] namespace "namespace-1568306511-13093" deleted
I0912 16:42:50.007] namespace "namespace-1568306513-13331" deleted
I0912 16:42:50.007] namespace "namespace-1568306514-2523" deleted
I0912 16:42:50.007] namespace "namespace-1568306553-29460" deleted
I0912 16:42:50.007] namespace "namespace-1568306554-3827" deleted
I0912 16:42:50.007] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0912 16:42:50.007] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0912 16:42:50.007] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0912 16:42:50.007] has:warning: deleting cluster-scoped resources
I0912 16:42:50.007] Successful
I0912 16:42:50.008] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0912 16:42:50.008] namespace "kube-node-lease" deleted
I0912 16:42:50.008] namespace "my-namespace" deleted
I0912 16:42:50.008] namespace "namespace-1568306420-4831" deleted
... skipping 27 lines ...
I0912 16:42:50.011] namespace "namespace-1568306510-3731" deleted
I0912 16:42:50.011] namespace "namespace-1568306511-13093" deleted
I0912 16:42:50.011] namespace "namespace-1568306513-13331" deleted
I0912 16:42:50.011] namespace "namespace-1568306514-2523" deleted
I0912 16:42:50.011] namespace "namespace-1568306553-29460" deleted
I0912 16:42:50.011] namespace "namespace-1568306554-3827" deleted
I0912 16:42:50.011] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0912 16:42:50.011] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0912 16:42:50.012] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0912 16:42:50.012] has:namespace "my-namespace" deleted
I0912 16:42:50.114] core.sh:1329: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
I0912 16:42:50.190] namespace/other created
I0912 16:42:50.280] core.sh:1333: Successful get namespaces/other {{.metadata.name}}: other
I0912 16:42:50.366] core.sh:1337: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:50.519] pod/valid-pod created
I0912 16:42:50.616] core.sh:1341: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0912 16:42:50.706] core.sh:1343: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0912 16:42:50.788] Successful
I0912 16:42:50.788] message:error: a resource cannot be retrieved by name across all namespaces
I0912 16:42:50.789] has:a resource cannot be retrieved by name across all namespaces
I0912 16:42:50.872] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0912 16:42:50.945] pod "valid-pod" force deleted
I0912 16:42:51.035] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:51.109] namespace "other" deleted
W0912 16:42:51.210] E0912 16:42:50.191185   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.210] E0912 16:42:50.278054   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.210] E0912 16:42:50.379530   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.210] E0912 16:42:50.477062   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.211] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0912 16:42:51.211] E0912 16:42:51.192711   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.280] E0912 16:42:51.279391   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.381] E0912 16:42:51.380860   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.479] E0912 16:42:51.478495   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:51.757] I0912 16:42:51.756837   52998 shared_informer.go:197] Waiting for caches to sync for resource quota
W0912 16:42:51.757] I0912 16:42:51.756887   52998 shared_informer.go:204] Caches are synced for resource quota 
W0912 16:42:52.194] E0912 16:42:52.194199   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:52.209] I0912 16:42:52.208597   52998 shared_informer.go:197] Waiting for caches to sync for garbage collector
W0912 16:42:52.209] I0912 16:42:52.208749   52998 shared_informer.go:204] Caches are synced for garbage collector 
W0912 16:42:52.282] E0912 16:42:52.281659   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:52.383] E0912 16:42:52.382415   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:52.480] E0912 16:42:52.480147   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:53.196] E0912 16:42:53.195558   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:53.283] E0912 16:42:53.283130   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:53.384] E0912 16:42:53.383678   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:53.482] E0912 16:42:53.481546   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:53.510] I0912 16:42:53.509887   52998 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1568306554-3827
W0912 16:42:53.513] I0912 16:42:53.512798   52998 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1568306554-3827
W0912 16:42:54.197] E0912 16:42:54.196843   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:54.285] E0912 16:42:54.284547   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:54.385] E0912 16:42:54.384972   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:54.483] E0912 16:42:54.483038   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:55.199] E0912 16:42:55.198998   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:55.286] E0912 16:42:55.285867   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:55.388] E0912 16:42:55.387401   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:55.484] E0912 16:42:55.483995   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:56.201] E0912 16:42:56.200525   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:56.288] E0912 16:42:56.287414   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:42:56.388] +++ exit code: 0
I0912 16:42:56.388] Recording: run_secrets_test
I0912 16:42:56.389] Running command: run_secrets_test
I0912 16:42:56.389] 
I0912 16:42:56.389] +++ Running case: test-cmd.run_secrets_test 
I0912 16:42:56.389] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 57 lines ...
I0912 16:42:58.099] core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0912 16:42:58.190] secret "test-secret" deleted
I0912 16:42:58.276] secret/test-secret created
I0912 16:42:58.366] core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
I0912 16:42:58.454] core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
I0912 16:42:58.531] secret "test-secret" deleted
W0912 16:42:58.632] E0912 16:42:56.388761   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.633] E0912 16:42:56.485401   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.633] I0912 16:42:56.514889   69106 loader.go:375] Config loaded from file:  /tmp/tmp.cpkGYM70Ly/.kube/config
W0912 16:42:58.633] E0912 16:42:57.201861   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.633] E0912 16:42:57.288921   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.633] E0912 16:42:57.389959   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.634] E0912 16:42:57.486731   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.634] E0912 16:42:58.203132   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.634] E0912 16:42:58.290376   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.634] E0912 16:42:58.391375   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:58.634] E0912 16:42:58.488577   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:42:58.735] secret/secret-string-data created
I0912 16:42:58.791] core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0912 16:42:58.881] core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
I0912 16:42:58.971] core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
I0912 16:42:59.045] secret "secret-string-data" deleted
I0912 16:42:59.140] core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:42:59.320] secret "test-secret" deleted
I0912 16:42:59.409] namespace "test-secrets" deleted
W0912 16:42:59.512] E0912 16:42:59.204378   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:59.512] E0912 16:42:59.292019   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:59.513] E0912 16:42:59.392696   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:59.513] E0912 16:42:59.490161   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:42:59.579] I0912 16:42:59.578660   52998 namespace_controller.go:171] Namespace has been deleted my-namespace
W0912 16:43:00.040] I0912 16:43:00.040290   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306420-4831
W0912 16:43:00.044] I0912 16:43:00.044359   52998 namespace_controller.go:171] Namespace has been deleted kube-node-lease
W0912 16:43:00.085] I0912 16:43:00.084896   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306439-3541
W0912 16:43:00.086] I0912 16:43:00.085834   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306437-22940
W0912 16:43:00.086] I0912 16:43:00.085849   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306425-30134
W0912 16:43:00.095] I0912 16:43:00.094982   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306434-19102
W0912 16:43:00.106] I0912 16:43:00.105708   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306422-14991
W0912 16:43:00.107] I0912 16:43:00.106844   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306438-9832
W0912 16:43:00.109] I0912 16:43:00.109459   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306433-7755
W0912 16:43:00.110] I0912 16:43:00.109470   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306431-27502
W0912 16:43:00.206] E0912 16:43:00.206149   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:00.256] I0912 16:43:00.255768   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306448-13488
W0912 16:43:00.258] I0912 16:43:00.258092   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306460-25702
W0912 16:43:00.264] I0912 16:43:00.264167   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306449-18611
W0912 16:43:00.268] I0912 16:43:00.267962   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306463-28741
W0912 16:43:00.287] I0912 16:43:00.286440   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306461-1007
W0912 16:43:00.294] E0912 16:43:00.293435   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:00.294] I0912 16:43:00.293539   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306464-22872
W0912 16:43:00.298] I0912 16:43:00.298436   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306468-21799
W0912 16:43:00.310] I0912 16:43:00.309864   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306470-4309
W0912 16:43:00.312] I0912 16:43:00.311718   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306467-15688
W0912 16:43:00.314] I0912 16:43:00.314068   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306464-31572
W0912 16:43:00.394] E0912 16:43:00.394191   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:00.434] I0912 16:43:00.433477   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306473-4726
W0912 16:43:00.444] I0912 16:43:00.443721   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306491-21551
W0912 16:43:00.465] I0912 16:43:00.465298   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306492-627
W0912 16:43:00.475] I0912 16:43:00.474651   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306490-4540
W0912 16:43:00.476] I0912 16:43:00.476564   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306473-24893
W0912 16:43:00.486] I0912 16:43:00.485989   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306506-20713
W0912 16:43:00.492] E0912 16:43:00.491505   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:00.495] I0912 16:43:00.495042   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306500-21343
W0912 16:43:00.511] I0912 16:43:00.511204   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306510-3731
W0912 16:43:00.520] I0912 16:43:00.520442   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306510-31163
W0912 16:43:00.524] I0912 16:43:00.523537   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306507-10975
W0912 16:43:00.581] I0912 16:43:00.580771   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306511-13093
W0912 16:43:00.594] I0912 16:43:00.593686   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306513-13331
W0912 16:43:00.603] I0912 16:43:00.602735   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306514-2523
W0912 16:43:00.613] I0912 16:43:00.612383   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306553-29460
W0912 16:43:00.658] I0912 16:43:00.657602   52998 namespace_controller.go:171] Namespace has been deleted namespace-1568306554-3827
W0912 16:43:01.194] I0912 16:43:01.193879   52998 namespace_controller.go:171] Namespace has been deleted other
W0912 16:43:01.208] E0912 16:43:01.207638   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:01.295] E0912 16:43:01.294794   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:01.396] E0912 16:43:01.395439   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:01.493] E0912 16:43:01.492853   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:02.209] E0912 16:43:02.208998   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:02.296] E0912 16:43:02.296141   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:02.397] E0912 16:43:02.396729   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:02.494] E0912 16:43:02.494145   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:03.211] E0912 16:43:03.210371   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:03.298] E0912 16:43:03.297464   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:03.398] E0912 16:43:03.398165   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:03.496] E0912 16:43:03.495459   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:04.212] E0912 16:43:04.211857   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:04.299] E0912 16:43:04.298988   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:04.400] E0912 16:43:04.399726   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:04.497] E0912 16:43:04.496911   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:04.598] +++ exit code: 0
I0912 16:43:04.598] Recording: run_configmap_tests
I0912 16:43:04.598] Running command: run_configmap_tests
I0912 16:43:04.598] 
I0912 16:43:04.598] +++ Running case: test-cmd.run_configmap_tests 
I0912 16:43:04.598] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 14 lines ...
I0912 16:43:05.679] configmap/test-binary-configmap created
I0912 16:43:05.765] core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
I0912 16:43:05.851] core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
I0912 16:43:06.081] configmap "test-configmap" deleted
I0912 16:43:06.162] configmap "test-binary-configmap" deleted
I0912 16:43:06.243] namespace "test-configmaps" deleted
W0912 16:43:06.344] E0912 16:43:05.213281   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.344] E0912 16:43:05.300275   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.345] E0912 16:43:05.401238   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.345] E0912 16:43:05.498385   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.345] E0912 16:43:06.215131   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.346] E0912 16:43:06.301777   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.403] E0912 16:43:06.402624   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:06.500] E0912 16:43:06.499969   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:07.217] E0912 16:43:07.216718   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:07.303] E0912 16:43:07.303257   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:07.404] E0912 16:43:07.403985   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:07.502] E0912 16:43:07.501422   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:08.218] E0912 16:43:08.218234   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:08.305] E0912 16:43:08.304718   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:08.406] E0912 16:43:08.405473   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:08.503] E0912 16:43:08.502878   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:09.220] E0912 16:43:09.219504   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:09.307] E0912 16:43:09.306465   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:09.408] E0912 16:43:09.407372   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:09.490] I0912 16:43:09.489639   52998 namespace_controller.go:171] Namespace has been deleted test-secrets
W0912 16:43:09.504] E0912 16:43:09.504070   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:10.221] E0912 16:43:10.221017   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:10.308] E0912 16:43:10.307858   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:10.409] E0912 16:43:10.409013   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:10.506] E0912 16:43:10.505667   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:11.223] E0912 16:43:11.222582   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:11.309] E0912 16:43:11.308740   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:11.410] +++ exit code: 0
I0912 16:43:11.410] Recording: run_client_config_tests
I0912 16:43:11.410] Running command: run_client_config_tests
I0912 16:43:11.410] 
I0912 16:43:11.410] +++ Running case: test-cmd.run_client_config_tests 
I0912 16:43:11.410] +++ working dir: /go/src/k8s.io/kubernetes
I0912 16:43:11.411] +++ command: run_client_config_tests
I0912 16:43:11.420] +++ [0912 16:43:11] Creating namespace namespace-1568306591-17809
I0912 16:43:11.490] namespace/namespace-1568306591-17809 created
I0912 16:43:11.556] Context "test" modified.
I0912 16:43:11.561] +++ [0912 16:43:11] Testing client config
I0912 16:43:11.628] Successful
I0912 16:43:11.629] message:error: stat missing: no such file or directory
I0912 16:43:11.629] has:missing: no such file or directory
I0912 16:43:11.694] Successful
I0912 16:43:11.695] message:error: stat missing: no such file or directory
I0912 16:43:11.695] has:missing: no such file or directory
I0912 16:43:11.759] Successful
I0912 16:43:11.760] message:error: stat missing: no such file or directory
I0912 16:43:11.760] has:missing: no such file or directory
I0912 16:43:11.826] Successful
I0912 16:43:11.826] message:Error in configuration: context was not found for specified context: missing-context
I0912 16:43:11.826] has:context was not found for specified context: missing-context
I0912 16:43:11.891] Successful
I0912 16:43:11.892] message:error: no server found for cluster "missing-cluster"
I0912 16:43:11.892] has:no server found for cluster "missing-cluster"
I0912 16:43:11.958] Successful
I0912 16:43:11.958] message:error: auth info "missing-user" does not exist
I0912 16:43:11.958] has:auth info "missing-user" does not exist
W0912 16:43:12.059] E0912 16:43:11.410552   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:12.059] E0912 16:43:11.506923   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:12.160] Successful
I0912 16:43:12.160] message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0912 16:43:12.161] has:error loading config file
I0912 16:43:12.161] Successful
I0912 16:43:12.161] message:error: stat missing-config: no such file or directory
I0912 16:43:12.161] has:no such file or directory
I0912 16:43:12.173] +++ exit code: 0
I0912 16:43:12.206] Recording: run_service_accounts_tests
I0912 16:43:12.206] Running command: run_service_accounts_tests
I0912 16:43:12.227] 
I0912 16:43:12.230] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 7 lines ...
I0912 16:43:12.558] namespace/test-service-accounts created
I0912 16:43:12.647] core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
I0912 16:43:12.719] serviceaccount/test-service-account created
I0912 16:43:12.807] core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
I0912 16:43:12.881] serviceaccount "test-service-account" deleted
I0912 16:43:12.960] namespace "test-service-accounts" deleted
W0912 16:43:13.060] E0912 16:43:12.224102   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.061] E0912 16:43:12.310290   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.061] E0912 16:43:12.412050   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.061] E0912 16:43:12.508368   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.226] E0912 16:43:13.225380   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.312] E0912 16:43:13.311839   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.414] E0912 16:43:13.413462   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:13.510] E0912 16:43:13.509734   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:14.227] E0912 16:43:14.226899   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:14.313] E0912 16:43:14.313188   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:14.415] E0912 16:43:14.415090   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:14.511] E0912 16:43:14.511253   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:15.229] E0912 16:43:15.228350   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:15.315] E0912 16:43:15.314773   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:15.417] E0912 16:43:15.416615   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:15.513] E0912 16:43:15.512845   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:16.230] E0912 16:43:16.229780   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:16.316] E0912 16:43:16.316310   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:16.328] I0912 16:43:16.328318   52998 namespace_controller.go:171] Namespace has been deleted test-configmaps
W0912 16:43:16.418] E0912 16:43:16.417994   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:16.515] E0912 16:43:16.514548   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:17.231] E0912 16:43:17.231131   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:17.318] E0912 16:43:17.317854   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:17.420] E0912 16:43:17.419610   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:17.516] E0912 16:43:17.515918   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:18.087] +++ exit code: 0
I0912 16:43:18.119] Recording: run_job_tests
I0912 16:43:18.119] Running command: run_job_tests
I0912 16:43:18.141] 
I0912 16:43:18.143] +++ Running case: test-cmd.run_job_tests 
I0912 16:43:18.145] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 6 lines ...
I0912 16:43:18.500] namespace/test-jobs created
I0912 16:43:18.602] batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
I0912 16:43:18.690] cronjob.batch/pi created
I0912 16:43:18.791] batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
I0912 16:43:18.871] NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
I0912 16:43:18.872] pi     59 23 31 2 *   False     0        <none>          0s
W0912 16:43:18.972] E0912 16:43:18.232557   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:18.973] E0912 16:43:18.319206   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:18.973] E0912 16:43:18.421159   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:18.974] E0912 16:43:18.517363   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:18.974] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0912 16:43:19.007] E0912 16:43:19.006787   52998 cronjob_controller.go:272] Cannot determine if test-jobs/pi needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
W0912 16:43:19.008] I0912 16:43:19.007302   52998 event.go:255] Event(v1.ObjectReference{Kind:"CronJob", Namespace:"test-jobs", Name:"pi", UID:"7e64b844-a529-4d43-9fdc-74075e6ad9c4", APIVersion:"batch/v1beta1", ResourceVersion:"1392", FieldPath:""}): type: 'Warning' reason: 'FailedNeedsStart' Cannot determine if job needs to be started: too many missed start time (> 100). Set or decrease .spec.startingDeadlineSeconds or check clock skew
I0912 16:43:19.109] Name:                          pi
I0912 16:43:19.109] Namespace:                     test-jobs
I0912 16:43:19.109] Labels:                        run=pi
I0912 16:43:19.109] Annotations:                   <none>
I0912 16:43:19.109] Schedule:                      59 23 31 2 *
I0912 16:43:19.109] Concurrency Policy:            Allow
I0912 16:43:19.110] Suspend:                       False
I0912 16:43:19.110] Successful Job History Limit:  3
I0912 16:43:19.110] Failed Job History Limit:      1
I0912 16:43:19.110] Starting Deadline Seconds:     <unset>
I0912 16:43:19.110] Selector:                      <unset>
I0912 16:43:19.110] Parallelism:                   <unset>
I0912 16:43:19.110] Completions:                   <unset>
I0912 16:43:19.110] Pod Template:
I0912 16:43:19.110]   Labels:  run=pi
... skipping 18 lines ...
I0912 16:43:19.112] Events:              <none>
I0912 16:43:19.112] Successful
I0912 16:43:19.112] message:job.batch/test-job
I0912 16:43:19.112] has:job.batch/test-job
I0912 16:43:19.199] batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
I0912 16:43:19.300] job.batch/test-job created
W0912 16:43:19.401] E0912 16:43:19.234080   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:19.402] I0912 16:43:19.298299   52998 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"7d9fb3ac-568d-4c28-85f6-0a8ba1ddf742", APIVersion:"batch/v1", ResourceVersion:"1396", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-gcr8w
W0912 16:43:19.402] E0912 16:43:19.321305   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:19.423] E0912 16:43:19.422721   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:19.519] E0912 16:43:19.518817   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:19.620] batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
I0912 16:43:19.620] NAME       COMPLETIONS   DURATION   AGE
I0912 16:43:19.620] test-job   0/1           0s         0s
I0912 16:43:19.620] Name:           test-job
I0912 16:43:19.620] Namespace:      test-jobs
I0912 16:43:19.620] Selector:       controller-uid=7d9fb3ac-568d-4c28-85f6-0a8ba1ddf742
... skipping 2 lines ...
I0912 16:43:19.621]                 run=pi
I0912 16:43:19.621] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0912 16:43:19.621] Controlled By:  CronJob/pi
I0912 16:43:19.621] Parallelism:    1
I0912 16:43:19.621] Completions:    1
I0912 16:43:19.621] Start Time:     Thu, 12 Sep 2019 16:43:19 +0000
I0912 16:43:19.621] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0912 16:43:19.621] Pod Template:
I0912 16:43:19.622]   Labels:  controller-uid=7d9fb3ac-568d-4c28-85f6-0a8ba1ddf742
I0912 16:43:19.622]            job-name=test-job
I0912 16:43:19.622]            run=pi
I0912 16:43:19.622]   Containers:
I0912 16:43:19.622]    pi:
... skipping 15 lines ...
I0912 16:43:19.624]   Type    Reason            Age   From            Message
I0912 16:43:19.624]   ----    ------            ----  ----            -------
I0912 16:43:19.624]   Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-gcr8w
I0912 16:43:19.687] job.batch "test-job" deleted
I0912 16:43:19.789] cronjob.batch "pi" deleted
I0912 16:43:19.873] namespace "test-jobs" deleted
W0912 16:43:20.236] E0912 16:43:20.235638   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:20.323] E0912 16:43:20.323104   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:20.424] E0912 16:43:20.424269   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:20.521] E0912 16:43:20.520542   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:21.237] E0912 16:43:21.237148   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:21.325] E0912 16:43:21.324468   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:21.426] E0912 16:43:21.425804   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:21.522] E0912 16:43:21.522089   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:22.239] E0912 16:43:22.238776   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:22.326] E0912 16:43:22.326009   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:22.428] E0912 16:43:22.427629   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:22.524] E0912 16:43:22.523906   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:23.063] I0912 16:43:23.062503   52998 namespace_controller.go:171] Namespace has been deleted test-service-accounts
W0912 16:43:23.241] E0912 16:43:23.240643   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:23.328] E0912 16:43:23.327455   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:23.429] E0912 16:43:23.429186   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:23.526] E0912 16:43:23.525483   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:24.242] E0912 16:43:24.242206   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:24.329] E0912 16:43:24.329202   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:24.431] E0912 16:43:24.430628   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:24.527] E0912 16:43:24.526831   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:24.984] +++ exit code: 0
I0912 16:43:25.014] Recording: run_create_job_tests
I0912 16:43:25.014] Running command: run_create_job_tests
I0912 16:43:25.035] 
I0912 16:43:25.038] +++ Running case: test-cmd.run_create_job_tests 
I0912 16:43:25.040] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 27 lines ...
I0912 16:43:26.379] +++ [0912 16:43:26] Testing pod templates
I0912 16:43:26.465] core.sh:1415: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:43:26.629] podtemplate/nginx created
I0912 16:43:26.724] core.sh:1419: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0912 16:43:26.796] NAME    CONTAINERS   IMAGES   POD LABELS
I0912 16:43:26.797] nginx   nginx        nginx    name=nginx
W0912 16:43:26.897] E0912 16:43:25.243425   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.898] I0912 16:43:25.266086   52998 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568306605-26927", Name:"test-job", UID:"a7f66d55-06cf-44c3-8337-bdfce131fb3f", APIVersion:"batch/v1", ResourceVersion:"1415", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-2wvmt
W0912 16:43:26.898] E0912 16:43:25.330432   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.899] E0912 16:43:25.431916   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.899] I0912 16:43:25.521833   52998 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568306605-26927", Name:"test-job-pi", UID:"247bbf6e-8506-4952-ac18-1ebf064a5f4f", APIVersion:"batch/v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-q5gbh
W0912 16:43:26.899] E0912 16:43:25.527919   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.899] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0912 16:43:26.899] I0912 16:43:25.876098   52998 event.go:255] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1568306605-26927", Name:"my-pi", UID:"98336eca-88e7-4eec-a897-3092957b79d7", APIVersion:"batch/v1", ResourceVersion:"1430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-lh9p6
W0912 16:43:26.900] E0912 16:43:26.244732   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.900] E0912 16:43:26.331847   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.900] E0912 16:43:26.433263   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.900] E0912 16:43:26.529745   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:26.900] I0912 16:43:26.627028   49456 controller.go:606] quota admission added evaluator for: podtemplates
I0912 16:43:27.001] core.sh:1427: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0912 16:43:27.038] podtemplate "nginx" deleted
I0912 16:43:27.126] core.sh:1431: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
I0912 16:43:27.138] +++ exit code: 0
I0912 16:43:27.169] Recording: run_service_tests
... skipping 35 lines ...
I0912 16:43:27.832] Port:              <unset>  6379/TCP
I0912 16:43:27.832] TargetPort:        6379/TCP
I0912 16:43:27.832] Endpoints:         <none>
I0912 16:43:27.832] Session Affinity:  None
I0912 16:43:27.832] Events:            <none>
I0912 16:43:27.832] 
W0912 16:43:27.933] E0912 16:43:27.246059   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:27.933] E0912 16:43:27.333223   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:27.933] E0912 16:43:27.434557   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:27.933] E0912 16:43:27.531066   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:28.034] core.sh:868: Successful describe
I0912 16:43:28.034] Name:              redis-master
I0912 16:43:28.034] Namespace:         default
I0912 16:43:28.034] Labels:            app=redis
I0912 16:43:28.035]                    role=master
I0912 16:43:28.035]                    tier=backend
... skipping 209 lines ...
I0912 16:43:29.162]   selector:
I0912 16:43:29.162]     role: padawan
I0912 16:43:29.162]   sessionAffinity: None
I0912 16:43:29.162]   type: ClusterIP
I0912 16:43:29.162] status:
I0912 16:43:29.162]   loadBalancer: {}
W0912 16:43:29.263] E0912 16:43:28.247954   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.263] E0912 16:43:28.334543   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.264] E0912 16:43:28.435923   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.264] E0912 16:43:28.532737   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.264] error: you must specify resources by --filename when --local is set.
W0912 16:43:29.264] Example resource specifications include:
W0912 16:43:29.265]    '-f rsrc.yaml'
W0912 16:43:29.265]    '--filename=rsrc.json'
W0912 16:43:29.265] E0912 16:43:29.249509   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.338] E0912 16:43:29.337364   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.438] E0912 16:43:29.437988   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:29.534] E0912 16:43:29.534254   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:29.635] core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0912 16:43:29.636] core.sh:905: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0912 16:43:29.636] service "redis-master" deleted
I0912 16:43:29.687] core.sh:912: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0912 16:43:29.778] core.sh:916: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0912 16:43:29.940] service/redis-master created
... skipping 6 lines ...
I0912 16:43:30.699] service "redis-master" deleted
I0912 16:43:30.781] service "service-v1-test" deleted
I0912 16:43:30.870] core.sh:960: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0912 16:43:30.956] core.sh:964: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0912 16:43:31.104] service/redis-master created
W0912 16:43:31.205] I0912 16:43:29.962377   52998 namespace_controller.go:171] Namespace has been deleted test-jobs
W0912 16:43:31.205] E0912 16:43:30.252606   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:31.205] E0912 16:43:30.338627   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:31.205] E0912 16:43:30.439462   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:31.206] E0912 16:43:30.535368   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:31.254] E0912 16:43:31.254049   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:31.340] E0912 16:43:31.340020   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:31.441] service/redis-slave created
I0912 16:43:31.441] core.sh:969: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
I0912 16:43:31.441] Successful
I0912 16:43:31.441] message:NAME           RSRC
I0912 16:43:31.442] kubernetes     145
I0912 16:43:31.442] redis-master   1465
... skipping 84 lines ...
I0912 16:43:36.038] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0912 16:43:36.122] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0912 16:43:36.219] daemonset.apps/bind rolled back
I0912 16:43:36.308] apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0912 16:43:36.396] apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0912 16:43:36.494] Successful
I0912 16:43:36.494] message:error: unable to find specified revision 1000000 in history
I0912 16:43:36.495] has:unable to find specified revision
I0912 16:43:36.577] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0912 16:43:36.662] apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0912 16:43:36.760] daemonset.apps/bind rolled back
I0912 16:43:36.854] apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0912 16:43:36.938] apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0912 16:43:38.247] Namespace:    namespace-1568306617-1698
I0912 16:43:38.248] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.248] Labels:       app=guestbook
I0912 16:43:38.248]               tier=frontend
I0912 16:43:38.248] Annotations:  <none>
I0912 16:43:38.248] Replicas:     3 current / 3 desired
I0912 16:43:38.248] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.248] Pod Template:
I0912 16:43:38.248]   Labels:  app=guestbook
I0912 16:43:38.248]            tier=frontend
I0912 16:43:38.248]   Containers:
I0912 16:43:38.248]    php-redis:
I0912 16:43:38.248]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0912 16:43:38.372] Namespace:    namespace-1568306617-1698
I0912 16:43:38.372] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.372] Labels:       app=guestbook
I0912 16:43:38.372]               tier=frontend
I0912 16:43:38.372] Annotations:  <none>
I0912 16:43:38.372] Replicas:     3 current / 3 desired
I0912 16:43:38.372] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.372] Pod Template:
I0912 16:43:38.373]   Labels:  app=guestbook
I0912 16:43:38.373]            tier=frontend
I0912 16:43:38.373]   Containers:
I0912 16:43:38.373]    php-redis:
I0912 16:43:38.373]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0912 16:43:38.374]   Type    Reason            Age   From                    Message
I0912 16:43:38.374]   ----    ------            ----  ----                    -------
I0912 16:43:38.374]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-khxkg
I0912 16:43:38.374]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-t5229
I0912 16:43:38.374]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-5g9pg
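The repeated `frontend` blocks above and below are `kubectl describe` output for the replication controller created by core.sh, rendered several times (with and without the event list). A minimal sketch of the invocation, assuming the namespace shown in that output:

  kubectl describe rc frontend --namespace=namespace-1568306617-1698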
W0912 16:43:38.475] E0912 16:43:31.440853   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.476] E0912 16:43:31.536642   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.476] E0912 16:43:32.255620   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.476] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0912 16:43:38.477] I0912 16:43:32.309679   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"a5997634-a32c-4f25-b39f-739fa8837476", APIVersion:"apps/v1", ResourceVersion:"1480", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
W0912 16:43:38.477] I0912 16:43:32.314895   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"dd672421-4fa0-44fb-884e-c26d598e2006", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-fdbk5
W0912 16:43:38.478] I0912 16:43:32.318684   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"dd672421-4fa0-44fb-884e-c26d598e2006", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-zqf6h
W0912 16:43:38.478] E0912 16:43:32.341139   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.479] E0912 16:43:32.442196   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.479] E0912 16:43:32.538177   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.479] E0912 16:43:33.257094   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.479] I0912 16:43:33.300195   49456 controller.go:606] quota admission added evaluator for: daemonsets.apps
W0912 16:43:38.480] I0912 16:43:33.309407   49456 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
W0912 16:43:38.480] E0912 16:43:33.342556   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.480] E0912 16:43:33.443671   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.481] E0912 16:43:33.539689   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.481] E0912 16:43:34.258406   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.481] E0912 16:43:34.344059   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.482] E0912 16:43:34.445067   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.482] E0912 16:43:34.541128   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.482] E0912 16:43:35.260010   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.483] E0912 16:43:35.345421   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.483] E0912 16:43:35.446706   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.483] E0912 16:43:35.542213   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.484] E0912 16:43:36.261343   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.484] E0912 16:43:36.346740   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.484] E0912 16:43:36.448215   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.485] E0912 16:43:36.543333   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.490] E0912 16:43:36.772797   52998 daemon_controller.go:302] namespace-1568306614-25637/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1568306614-25637", SelfLink:"/apis/apps/v1/namespaces/namespace-1568306614-25637/daemonsets/bind", UID:"0fffb51b-28d2-4f30-8ae3-33d6f189b7ab", ResourceVersion:"1550", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63703903414, loc:(*time.Location)(0x7751020)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1568306614-25637\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0010b0ec0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, 
v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f501d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c84cc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0010b0f00), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000c826c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f5022c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0912 16:43:38.491] E0912 16:43:37.262829   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.491] E0912 16:43:37.348399   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.491] E0912 16:43:37.449735   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.492] E0912 16:43:37.544915   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.492] I0912 16:43:37.572121   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"1bb1eec1-93ae-40c9-883e-6432400676d7", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t727v
W0912 16:43:38.493] I0912 16:43:37.575652   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"1bb1eec1-93ae-40c9-883e-6432400676d7", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dcrpt
W0912 16:43:38.493] I0912 16:43:37.575966   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"1bb1eec1-93ae-40c9-883e-6432400676d7", APIVersion:"v1", ResourceVersion:"1559", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4v5rx
W0912 16:43:38.494] I0912 16:43:37.985985   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-khxkg
W0912 16:43:38.494] I0912 16:43:37.988732   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t5229
W0912 16:43:38.494] I0912 16:43:37.989161   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5g9pg
W0912 16:43:38.495] E0912 16:43:38.264225   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.495] E0912 16:43:38.350150   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.495] E0912 16:43:38.450998   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:38.547] E0912 16:43:38.546679   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:38.647] core.sh:1065: Successful describe
I0912 16:43:38.648] Name:         frontend
I0912 16:43:38.648] Namespace:    namespace-1568306617-1698
I0912 16:43:38.648] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.648] Labels:       app=guestbook
I0912 16:43:38.648]               tier=frontend
I0912 16:43:38.648] Annotations:  <none>
I0912 16:43:38.648] Replicas:     3 current / 3 desired
I0912 16:43:38.648] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.648] Pod Template:
I0912 16:43:38.649]   Labels:  app=guestbook
I0912 16:43:38.649]            tier=frontend
I0912 16:43:38.649]   Containers:
I0912 16:43:38.649]    php-redis:
I0912 16:43:38.649]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0912 16:43:38.650] Namespace:    namespace-1568306617-1698
I0912 16:43:38.650] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.650] Labels:       app=guestbook
I0912 16:43:38.650]               tier=frontend
I0912 16:43:38.651] Annotations:  <none>
I0912 16:43:38.651] Replicas:     3 current / 3 desired
I0912 16:43:38.651] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.651] Pod Template:
I0912 16:43:38.651]   Labels:  app=guestbook
I0912 16:43:38.651]            tier=frontend
I0912 16:43:38.651]   Containers:
I0912 16:43:38.651]    php-redis:
I0912 16:43:38.651]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0912 16:43:38.754] Namespace:    namespace-1568306617-1698
I0912 16:43:38.754] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.755] Labels:       app=guestbook
I0912 16:43:38.755]               tier=frontend
I0912 16:43:38.755] Annotations:  <none>
I0912 16:43:38.755] Replicas:     3 current / 3 desired
I0912 16:43:38.755] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.755] Pod Template:
I0912 16:43:38.755]   Labels:  app=guestbook
I0912 16:43:38.756]            tier=frontend
I0912 16:43:38.756]   Containers:
I0912 16:43:38.756]    php-redis:
I0912 16:43:38.756]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0912 16:43:38.867] Namespace:    namespace-1568306617-1698
I0912 16:43:38.868] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.868] Labels:       app=guestbook
I0912 16:43:38.868]               tier=frontend
I0912 16:43:38.868] Annotations:  <none>
I0912 16:43:38.868] Replicas:     3 current / 3 desired
I0912 16:43:38.868] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.868] Pod Template:
I0912 16:43:38.868]   Labels:  app=guestbook
I0912 16:43:38.869]            tier=frontend
I0912 16:43:38.869]   Containers:
I0912 16:43:38.869]    php-redis:
I0912 16:43:38.869]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0912 16:43:38.980] Namespace:    namespace-1568306617-1698
I0912 16:43:38.980] Selector:     app=guestbook,tier=frontend
I0912 16:43:38.980] Labels:       app=guestbook
I0912 16:43:38.980]               tier=frontend
I0912 16:43:38.980] Annotations:  <none>
I0912 16:43:38.981] Replicas:     3 current / 3 desired
I0912 16:43:38.981] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:38.981] Pod Template:
I0912 16:43:38.981]   Labels:  app=guestbook
I0912 16:43:38.981]            tier=frontend
I0912 16:43:38.981]   Containers:
I0912 16:43:38.981]    php-redis:
I0912 16:43:38.981]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0912 16:43:39.091] Namespace:    namespace-1568306617-1698
I0912 16:43:39.091] Selector:     app=guestbook,tier=frontend
I0912 16:43:39.092] Labels:       app=guestbook
I0912 16:43:39.092]               tier=frontend
I0912 16:43:39.092] Annotations:  <none>
I0912 16:43:39.092] Replicas:     3 current / 3 desired
I0912 16:43:39.092] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0912 16:43:39.092] Pod Template:
I0912 16:43:39.092]   Labels:  app=guestbook
I0912 16:43:39.092]            tier=frontend
I0912 16:43:39.092]   Containers:
I0912 16:43:39.092]    php-redis:
I0912 16:43:39.092]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0912 16:43:39.093]   ----    ------            ----  ----                    -------
I0912 16:43:39.094]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-khxkg
I0912 16:43:39.094]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-t5229
I0912 16:43:39.094]   Normal  SuccessfulCreate  2s    replication-controller  Created pod: frontend-5g9pg
I0912 16:43:39.181] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 3
I0912 16:43:39.278] replicationcontroller/frontend scaled
W0912 16:43:39.378] E0912 16:43:39.265425   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:39.379] I0912 16:43:39.283413   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-khxkg
W0912 16:43:39.379] E0912 16:43:39.351414   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:39.452] E0912 16:43:39.452316   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:39.548] E0912 16:43:39.548122   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:39.577] error: Expected replicas to be 3, was 2
I0912 16:43:39.678] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0912 16:43:39.678] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 2
I0912 16:43:39.678] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 2
I0912 16:43:39.763] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0912 16:43:39.841] replicationcontroller/frontend scaled
I0912 16:43:39.936] core.sh:1099: Successful get rc frontend {{.spec.replicas}}: 3
I0912 16:43:40.020] core.sh:1103: Successful get rc frontend {{.spec.replicas}}: 3
I0912 16:43:40.097] replicationcontroller/frontend scaled
I0912 16:43:40.193] core.sh:1107: Successful get rc frontend {{.spec.replicas}}: 2
I0912 16:43:40.269] replicationcontroller "frontend" deleted
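The scale sequence above relies on kubectl's size precondition: `error: Expected replicas to be 3, was 2` is what kubectl prints when `--current-replicas` does not match the live object, so that scale request is refused and the count stays at 2. A minimal sketch, assuming the `frontend` replication controller created earlier in this run:

  kubectl scale rc frontend --replicas=2                       # 3 -> 2
  kubectl scale rc frontend --current-replicas=3 --replicas=2  # refused: live value is 2, not 3
  kubectl get rc frontend -o go-template='{{.spec.replicas}}'  # still 2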
W0912 16:43:40.370] I0912 16:43:39.843990   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7bp62
W0912 16:43:40.370] I0912 16:43:40.099973   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"f3169289-9a76-4751-bc3c-b5feb9b7ea8a", APIVersion:"v1", ResourceVersion:"1597", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7bp62
W0912 16:43:40.371] E0912 16:43:40.266703   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:40.371] E0912 16:43:40.352793   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:40.438] I0912 16:43:40.437879   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-master", UID:"0d4770e2-c9dc-4d2d-b46f-2a3db7bb0359", APIVersion:"v1", ResourceVersion:"1608", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-nqk7g
W0912 16:43:40.454] E0912 16:43:40.453508   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:40.549] E0912 16:43:40.549340   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:40.595] I0912 16:43:40.594546   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-slave", UID:"e8a7bfd5-3bb2-4cef-b422-3cc2fba9474d", APIVersion:"v1", ResourceVersion:"1613", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-77gkf
W0912 16:43:40.598] I0912 16:43:40.598265   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-slave", UID:"e8a7bfd5-3bb2-4cef-b422-3cc2fba9474d", APIVersion:"v1", ResourceVersion:"1613", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-ffbmh
W0912 16:43:40.687] I0912 16:43:40.686403   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-master", UID:"0d4770e2-c9dc-4d2d-b46f-2a3db7bb0359", APIVersion:"v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-rcl5m
W0912 16:43:40.689] I0912 16:43:40.688418   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-master", UID:"0d4770e2-c9dc-4d2d-b46f-2a3db7bb0359", APIVersion:"v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-nwgzn
W0912 16:43:40.690] I0912 16:43:40.689422   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-slave", UID:"e8a7bfd5-3bb2-4cef-b422-3cc2fba9474d", APIVersion:"v1", ResourceVersion:"1622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-9tnvl
W0912 16:43:40.690] I0912 16:43:40.689703   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"redis-master", UID:"0d4770e2-c9dc-4d2d-b46f-2a3db7bb0359", APIVersion:"v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-2qpr4
... skipping 12 lines ...
W0912 16:43:41.271] I0912 16:43:41.087466   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"25faa8a7-332f-4bc9-8670-da479f140c35", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vkbh2
W0912 16:43:41.271] I0912 16:43:41.089989   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"25faa8a7-332f-4bc9-8670-da479f140c35", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-trtgs
W0912 16:43:41.271] I0912 16:43:41.090417   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"25faa8a7-332f-4bc9-8670-da479f140c35", APIVersion:"apps/v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-vvjtb
W0912 16:43:41.272] I0912 16:43:41.173064   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment", UID:"a2e205ae-e01f-47c1-a1ad-abd3d18e22bd", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
W0912 16:43:41.272] I0912 16:43:41.187503   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"25faa8a7-332f-4bc9-8670-da479f140c35", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-vvjtb
W0912 16:43:41.273] I0912 16:43:41.189008   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"25faa8a7-332f-4bc9-8670-da479f140c35", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-vkbh2
W0912 16:43:41.273] E0912 16:43:41.268092   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:41.355] E0912 16:43:41.354530   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:41.455] E0912 16:43:41.454502   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:41.551] E0912 16:43:41.550555   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:41.651] core.sh:1127: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
I0912 16:43:41.652] deployment.apps "nginx-deployment" deleted
I0912 16:43:41.652] Successful
I0912 16:43:41.652] message:service/expose-test-deployment exposed
I0912 16:43:41.652] has:service/expose-test-deployment exposed
I0912 16:43:41.652] service "expose-test-deployment" deleted
I0912 16:43:41.652] Successful
I0912 16:43:41.652] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0912 16:43:41.653] See 'kubectl expose -h' for help and examples
I0912 16:43:41.653] has:invalid deployment: no selectors
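The expose checks here cover both outcomes of `kubectl expose`: a workload with a pod selector is exposed as a service on the requested port, while an object carrying no selectors is rejected with the "cannot be exposed" error quoted above. A minimal sketch, assuming the nginx-deployment fixture used by core.sh:

  kubectl expose deployment nginx-deployment --port=80
  kubectl get service nginx-deployment -o go-template='{{(index .spec.ports 0).port}}'   # 80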
I0912 16:43:41.770] deployment.apps/nginx-deployment created
I0912 16:43:41.871] core.sh:1146: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
I0912 16:43:41.955] service/nginx-deployment exposed
I0912 16:43:42.045] core.sh:1150: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
I0912 16:43:42.121] deployment.apps "nginx-deployment" deleted
I0912 16:43:42.129] service "nginx-deployment" deleted
W0912 16:43:42.230] I0912 16:43:41.778534   52998 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment", UID:"71b9d4b1-2cce-4b16-9bbb-f50a68dc1888", APIVersion:"apps/v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
W0912 16:43:42.230] I0912 16:43:41.782641   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"b9b67fb9-ca44-463f-96bc-3a15911fe650", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-t4fbf
W0912 16:43:42.230] I0912 16:43:41.784603   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"b9b67fb9-ca44-463f-96bc-3a15911fe650", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-ppnvc
W0912 16:43:42.231] I0912 16:43:41.786174   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1568306617-1698", Name:"nginx-deployment-6986c7bc94", UID:"b9b67fb9-ca44-463f-96bc-3a15911fe650", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-spn78
W0912 16:43:42.270] E0912 16:43:42.269657   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:42.290] I0912 16:43:42.289383   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"65a8e9a5-14e6-497b-a00d-7f4f096eb9d5", APIVersion:"v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n9jlg
W0912 16:43:42.293] I0912 16:43:42.292327   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"65a8e9a5-14e6-497b-a00d-7f4f096eb9d5", APIVersion:"v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8wmm7
W0912 16:43:42.294] I0912 16:43:42.293065   52998 event.go:255] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1568306617-1698", Name:"frontend", UID:"65a8e9a5-14e6-497b-a00d-7f4f096eb9d5", APIVersion:"v1", ResourceVersion:"1721", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zqpb9
W0912 16:43:42.356] E0912 16:43:42.355908   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:42.456] E0912 16:43:42.456186   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:42.552] E0912 16:43:42.551717   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:42.653] replicationcontroller/frontend created
I0912 16:43:42.654] core.sh:1157: Successful get rc frontend {{.spec.replicas}}: 3
I0912 16:43:42.654] service/frontend exposed
I0912 16:43:42.654] core.sh:1161: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0912 16:43:42.654] service/frontend-2 exposed
I0912 16:43:42.721] core.sh:1165: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
... skipping 8 lines ...
I0912 16:43:43.542] service "frontend" deleted
I0912 16:43:43.549] service "frontend-2" deleted
I0912 16:43:43.555] service "frontend-3" deleted
I0912 16:43:43.561] service "frontend-4" deleted
I0912 16:43:43.567] service "frontend-5" deleted
I0912 16:43:43.655] Successful
I0912 16:43:43.655] message:error: cannot expose a Node
I0912 16:43:43.656] has:cannot expose
I0912 16:43:43.743] Successful
I0912 16:43:43.744] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0912 16:43:43.744] has:metadata.name: Invalid value
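Service names are DNS labels, capped at 63 characters, which is what the `metadata.name: Invalid value` message above is enforcing. A minimal sketch, assuming an existing controller such as the frontend rc (the exact object the script exposes is not shown in this excerpt):

  kubectl expose rc frontend --port=80 \
    --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
  # rejected: must be no more than 63 characters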
W0912 16:43:43.845] E0912 16:43:43.271071   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:43.845] E0912 16:43:43.357405   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:43.845] E0912 16:43:43.457447   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0912 16:43:43.846] E0912 16:43:43.552739   52998 reflector.go:123] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0912 16:43:43.946] Successful
I0912 16:43:43.947] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
I0912 16:43:43.